kasan: add kernel address sanitizer infrastructure
Kernel Address sanitizer (KASan) is a dynamic memory error detector. It
provides a fast and comprehensive solution for finding use-after-free and
out-of-bounds bugs.
KASAN uses compile-time instrumentation for checking every memory access,
therefore GCC > v4.9.2 is required. v4.9.2 almost works, but has issues with
putting symbol aliases into the wrong section, which breaks kasan
instrumentation of globals.
This patch only adds the infrastructure for the kernel address sanitizer.
It's not available for use yet. The idea and some code were borrowed from [1].
Basic idea:
The main idea of KASAN is to use shadow memory to record whether each byte
of memory is safe to access or not, and to use the compiler's instrumentation
to check the shadow memory on each memory access.
Address sanitizer uses 1/8 of the memory addressable in the kernel for shadow
memory and uses a direct mapping with a scale and offset to translate a
memory address to its corresponding shadow address.
Here is the function that translates an address to its corresponding shadow
address:
unsigned long kasan_mem_to_shadow(unsigned long addr)
{
return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes there is one corresponding byte of shadow memory.
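For example, with 8-byte-aligned a, all of a, a+1, ..., a+7 map to the
same shadow byte, since the shift discards the low 3 bits:

	kasan_mem_to_shadow(a) == kasan_mem_to_shadow(a + 7)
	                       == (a >> 3) + KASAN_SHADOW_OFFSET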
The following encoding is used for each shadow byte: 0 means that all 8 bytes
of the corresponding memory region are valid for access; k (1 <= k <= 7)
means that the first k bytes are valid for access, and the other (8 - k) bytes
are not; any negative value indicates that the entire 8-byte region is
inaccessible. Different negative values are used to distinguish between
different kinds of inaccessible memory (redzones, freed memory) (see
mm/kasan/kasan.h).
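For example, the check for a 1-byte access at address a conceptually
reduces to the following (a sketch; memory_is_poisoned_1() below is the
real implementation, and report_bad_access() here just stands in for the
actual reporting path):

	s8 shadow = *(s8 *)kasan_mem_to_shadow(a);
	if (shadow && (s8)(a & 7) >= shadow)
		report_bad_access(a);

The access is bad either when the granule is fully poisoned (negative
shadow value) or when the offset within the granule reaches past the k
valid bytes.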
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr),
__asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by
checking the corresponding shadow memory. If the access is not valid, an
error is printed.
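For example, for a 4-byte store through a pointer p, the compiler roughly
rewrites

	*p = x;

into

	__asan_store4((unsigned long)p);
	*p = x;

so the shadow check runs immediately before the real access.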
Historical background of the address sanitizer from Dmitry Vyukov:
"We've developed the set of tools, AddressSanitizer (Asan),
ThreadSanitizer and MemorySanitizer, for user space. We actively use
them for testing inside of Google (continuous testing, fuzzing,
running prod services). To date the tools have found more than 10'000
scary bugs in Chromium, Google internal codebase and various
open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
lots of others): [2] [3] [4].
The tools are part of both gcc and clang compilers.
We have not yet done massive testing under the Kernel AddressSanitizer
(it's kind of chicken and egg problem, you need it to be upstream to
start applying it extensively). To date it has found about 50 bugs.
Bugs that we've found in upstream kernel are listed in [5].
We've also found ~20 bugs in our internal version of the kernel. Also
people from Samsung and Oracle have found some.
[...]
As others noted, the main feature of AddressSanitizer is its
performance due to inline compiler instrumentation and simple linear
shadow memory. User-space Asan has ~2x slowdown on computational
programs and ~2x memory consumption increase. Taking into account that
kernel usually consumes only small fraction of CPU and memory when
running real user-space programs, I would expect that kernel Asan will
have ~10-30% slowdown and similar memory consumption increase (when we
finish all tuning).
I agree that Asan can well replace kmemcheck. We have plans to start
working on Kernel MemorySanitizer that finds uses of uninitialized
memory. Asan+Msan will provide feature-parity with kmemcheck. As
others noted, Asan will unlikely replace debug slab and pagealloc that
can be enabled at runtime. Asan uses compiler instrumentation, so even
if it is disabled, it still incurs visible overheads.
Asan technology is easily portable to other architectures. Compiler
instrumentation is fully portable. Runtime has some arch-dependent
parts like shadow mapping and atomic operation interception. They are
relatively easy to port."
Comparison with other debugging features:
========================================
KMEMCHECK:
- KASan can do almost everything that kmemcheck can. KASan uses
compile-time instrumentation, which makes it significantly faster than
kmemcheck. The only advantage of kmemcheck over KASan is detection of
uninitialized memory reads.
Some brief performance testing showed that kasan could be
500-600 times faster than kmemcheck:
$ netperf -l 30
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
                Recv    Send    Send
                Socket  Socket  Message  Elapsed
                Size    Size    Size     Time     Throughput
                bytes   bytes   bytes    secs.    10^6bits/sec

no debug:       87380   16384   16384    30.00    41624.72
kasan inline:   87380   16384   16384    30.00    12870.54
kasan outline:  87380   16384   16384    30.00    10586.39
kmemcheck:      87380   16384   16384    30.03       20.23
- Also kmemcheck cannot work with more than one CPU: it always sets the
number of CPUs to 1. KASan doesn't have such a limitation.
DEBUG_PAGEALLOC:
- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
granularity, so it is able to find more bugs.
SLUB_DEBUG (poisoning, redzones):
- SLUB_DEBUG has lower overhead than KASan.
- SLUB_DEBUG in most cases is not able to detect bad reads;
KASan is able to detect both bad reads and writes.
- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
bugs only on allocation/freeing of the object. KASan catches
bugs right before they happen, so we always know the exact
place of the first bad read/write.
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
[2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
[3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
Based on work by Andrey Konovalov.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
/*
 * This file contains shadow memory manipulation code.
 *
 * Copyright (c) 2014 Samsung Electronics Co., Ltd.
 * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
 *
 * Some code was borrowed from https://github.com/xairy/linux by
 * Andrey Konovalov <adech.fo@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#define DISABLE_BRANCH_PROFILING

#include <linux/export.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/stacktrace.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/kasan.h>

#include "kasan.h"
#include "../slab.h"

/*
 * Poisons the shadow memory for 'size' bytes starting from 'addr'.
 * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
 */
static void kasan_poison_shadow(const void *address, size_t size, u8 value)
{
	void *shadow_start, *shadow_end;

	shadow_start = kasan_mem_to_shadow(address);
	shadow_end = kasan_mem_to_shadow(address + size);

	memset(shadow_start, value, shadow_end - shadow_start);
}

void kasan_unpoison_shadow(const void *address, size_t size)
{
	kasan_poison_shadow(address, size, 0);

	if (size & KASAN_SHADOW_MASK) {
		/* Record how many bytes of the last granule are accessible. */
		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
		*shadow = size & KASAN_SHADOW_MASK;
	}
}
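
/*
 * Example: assuming 'p' is 8-byte aligned, kasan_unpoison_shadow(p, 13)
 * zeroes the shadow byte of the first (fully accessible) granule and then
 * stores 5 (== 13 & KASAN_SHADOW_MASK) into the shadow byte of the second,
 * partially accessible granule.
 */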

/*
 * All functions below always inlined so compiler could
 * perform better optimizations in each of __asan_loadX/__asan_storeX
 * depending on memory access size X.
 */

static __always_inline bool memory_is_poisoned_1(unsigned long addr)
{
	s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(shadow_value)) {
		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
		return unlikely(last_accessible_byte >= shadow_value);
	}

	return false;
}

static __always_inline bool memory_is_poisoned_2(unsigned long addr)
{
	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(*shadow_addr)) {
		if (memory_is_poisoned_1(addr + 1))
			return true;

		if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
			return false;

		return unlikely(*(u8 *)shadow_addr);
	}

	return false;
}

static __always_inline bool memory_is_poisoned_4(unsigned long addr)
{
	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(*shadow_addr)) {
		if (memory_is_poisoned_1(addr + 3))
			return true;

		if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
			return false;

		return unlikely(*(u8 *)shadow_addr);
	}

	return false;
}

static __always_inline bool memory_is_poisoned_8(unsigned long addr)
{
	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(*shadow_addr)) {
		if (memory_is_poisoned_1(addr + 7))
			return true;

		if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
			return false;

		return unlikely(*(u8 *)shadow_addr);
	}

	return false;
}

static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
	u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(*shadow_addr)) {
		u16 shadow_first_bytes = *(u16 *)shadow_addr;
		s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;

		if (unlikely(shadow_first_bytes))
			return true;

		if (likely(!last_byte))
			return false;

		return memory_is_poisoned_1(addr + 15);
	}

	return false;
}

static __always_inline unsigned long bytes_is_zero(const u8 *start,
					size_t size)
{
	while (size) {
		if (unlikely(*start))
			return (unsigned long)start;
		start++;
		size--;
	}

	return 0;
}

static __always_inline unsigned long memory_is_zero(const void *start,
						const void *end)
{
	unsigned int words;
	unsigned long ret;
	unsigned int prefix = (unsigned long)start % 8;

	if (end - start <= 16)
		return bytes_is_zero(start, end - start);

	if (prefix) {
		prefix = 8 - prefix;
		ret = bytes_is_zero(start, prefix);
		if (unlikely(ret))
			return ret;
		start += prefix;
	}

	words = (end - start) / 8;
	while (words) {
		if (unlikely(*(u64 *)start))
			return bytes_is_zero(start, 8);
		start += 8;
		words--;
	}

	return bytes_is_zero(start, (end - start) % 8);
}

static __always_inline bool memory_is_poisoned_n(unsigned long addr,
						size_t size)
{
	unsigned long ret;

	ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
			kasan_mem_to_shadow((void *)addr + size - 1) + 1);

	if (unlikely(ret)) {
		unsigned long last_byte = addr + size - 1;
		s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);

		if (unlikely(ret != (unsigned long)last_shadow ||
			((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
			return true;
	}
	return false;
}

static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
{
	if (__builtin_constant_p(size)) {
		switch (size) {
		case 1:
			return memory_is_poisoned_1(addr);
		case 2:
			return memory_is_poisoned_2(addr);
		case 4:
			return memory_is_poisoned_4(addr);
		case 8:
			return memory_is_poisoned_8(addr);
		case 16:
			return memory_is_poisoned_16(addr);
		default:
			BUILD_BUG();
		}
	}

	return memory_is_poisoned_n(addr, size);
}

static __always_inline void check_memory_region(unsigned long addr,
						size_t size, bool write)
{
	struct kasan_access_info info;

	if (unlikely(size == 0))
		return;

	if (unlikely((void *)addr <
		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
		info.access_addr = (void *)addr;
		info.access_size = size;
		info.is_write = write;
		info.ip = _RET_IP_;
		kasan_report_user_access(&info);
		return;
	}

	if (likely(!memory_is_poisoned(addr, size)))
		return;

	kasan_report(addr, size, write, _RET_IP_);
}

void __asan_loadN(unsigned long addr, size_t size);
void __asan_storeN(unsigned long addr, size_t size);

/*
 * Interceptors for the string functions: check the accessed ranges,
 * then delegate to the uninstrumented __mem*() implementations.
 */
#undef memset
void *memset(void *addr, int c, size_t len)
{
	__asan_storeN((unsigned long)addr, len);

	return __memset(addr, c, len);
}

#undef memmove
void *memmove(void *dest, const void *src, size_t len)
{
	__asan_loadN((unsigned long)src, len);
	__asan_storeN((unsigned long)dest, len);

	return __memmove(dest, src, len);
}

#undef memcpy
void *memcpy(void *dest, const void *src, size_t len)
{
	__asan_loadN((unsigned long)src, len);
	__asan_storeN((unsigned long)dest, len);

	return __memcpy(dest, src, len);
}

void kasan_alloc_pages(struct page *page, unsigned int order)
{
	if (likely(!PageHighMem(page)))
		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
}

void kasan_free_pages(struct page *page, unsigned int order)
{
	if (likely(!PageHighMem(page)))
		kasan_poison_shadow(page_address(page),
				PAGE_SIZE << order,
				KASAN_FREE_PAGE);
}

void kasan_poison_slab(struct page *page)
{
	kasan_poison_shadow(page_address(page),
			PAGE_SIZE << compound_order(page),
			KASAN_KMALLOC_REDZONE);
}

void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
{
	kasan_unpoison_shadow(object, cache->object_size);
}

void kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
	kasan_poison_shadow(object,
			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
			KASAN_KMALLOC_REDZONE);
}

void kasan_slab_alloc(struct kmem_cache *cache, void *object)
{
	kasan_kmalloc(cache, object, cache->object_size);
}

void kasan_slab_free(struct kmem_cache *cache, void *object)
{
	unsigned long size = cache->object_size;
	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);

	/* RCU slabs could be legally used after free within the RCU period */
	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
		return;

	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
}

void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
{
	unsigned long redzone_start;
	unsigned long redzone_end;

	if (unlikely(object == NULL))
		return;

	redzone_start = round_up((unsigned long)(object + size),
				KASAN_SHADOW_SCALE_SIZE);
	redzone_end = round_up((unsigned long)object + cache->object_size,
				KASAN_SHADOW_SCALE_SIZE);

	kasan_unpoison_shadow(object, size);
	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
		KASAN_KMALLOC_REDZONE);
}
EXPORT_SYMBOL(kasan_kmalloc);
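
/*
 * Example: for a 100-byte allocation from a 128-byte cache,
 * kasan_kmalloc() unpoisons bytes 0..99 of the object and poisons
 * bytes 104..127 (the rounded-up remainder) as KASAN_KMALLOC_REDZONE;
 * the partially used granule covering bytes 96..103 encodes that only
 * its first 4 bytes are accessible.
 */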

void kasan_kmalloc_large(const void *ptr, size_t size)
{
	struct page *page;
	unsigned long redzone_start;
	unsigned long redzone_end;

	if (unlikely(ptr == NULL))
		return;

	page = virt_to_page(ptr);
	redzone_start = round_up((unsigned long)(ptr + size),
				KASAN_SHADOW_SCALE_SIZE);
	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));

	kasan_unpoison_shadow(ptr, size);
	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
		KASAN_PAGE_REDZONE);
}

void kasan_krealloc(const void *object, size_t size)
{
	struct page *page;

	if (unlikely(object == ZERO_SIZE_PTR))
		return;

	page = virt_to_head_page(object);

	if (unlikely(!PageSlab(page)))
		kasan_kmalloc_large(object, size);
	else
		kasan_kmalloc(page->slab_cache, object, size);
}

void kasan_kfree_large(const void *ptr)
{
	struct page *page = virt_to_page(ptr);

	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
			KASAN_FREE_PAGE);
}

int kasan_module_alloc(void *addr, size_t size)
{
	void *ret;
	size_t shadow_size;
	unsigned long shadow_start;

	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
			PAGE_SIZE);

	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
		return -EINVAL;

	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
			shadow_start + shadow_size,
			GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
			__builtin_return_address(0));
	return ret ? 0 : -ENOMEM;
}

void kasan_module_free(void *addr)
{
	vfree(kasan_mem_to_shadow(addr));
}

static void register_global(struct kasan_global *global)
{
	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);

	kasan_unpoison_shadow(global->beg, global->size);

	kasan_poison_shadow(global->beg + aligned_size,
		global->size_with_redzone - aligned_size,
		KASAN_GLOBAL_REDZONE);
}

void __asan_register_globals(struct kasan_global *globals, size_t size)
{
	int i;

	for (i = 0; i < size; i++)
		register_global(&globals[i]);
}
EXPORT_SYMBOL(__asan_register_globals);

void __asan_unregister_globals(struct kasan_global *globals, size_t size)
{
}
EXPORT_SYMBOL(__asan_unregister_globals);

#define DEFINE_ASAN_LOAD_STORE(size)				\
	void __asan_load##size(unsigned long addr)		\
	{							\
		check_memory_region(addr, size, false);		\
	}							\
	EXPORT_SYMBOL(__asan_load##size);			\
	__alias(__asan_load##size)				\
	void __asan_load##size##_noabort(unsigned long);	\
	EXPORT_SYMBOL(__asan_load##size##_noabort);		\
	void __asan_store##size(unsigned long addr)		\
	{							\
		check_memory_region(addr, size, true);		\
	}							\
	EXPORT_SYMBOL(__asan_store##size);			\
	__alias(__asan_store##size)				\
	void __asan_store##size##_noabort(unsigned long);	\
	EXPORT_SYMBOL(__asan_store##size##_noabort)

DEFINE_ASAN_LOAD_STORE(1);
DEFINE_ASAN_LOAD_STORE(2);
DEFINE_ASAN_LOAD_STORE(4);
DEFINE_ASAN_LOAD_STORE(8);
DEFINE_ASAN_LOAD_STORE(16);
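
/*
 * For illustration: DEFINE_ASAN_LOAD_STORE(1) above expands to
 * __asan_load1()/__asan_store1(), each forwarding a fixed-size 1-byte
 * access at 'addr' to check_memory_region(), together with exported
 * *_noabort aliases of the same functions.
 */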

void __asan_loadN(unsigned long addr, size_t size)
{
	check_memory_region(addr, size, false);
}
EXPORT_SYMBOL(__asan_loadN);

__alias(__asan_loadN)
void __asan_loadN_noabort(unsigned long, size_t);
EXPORT_SYMBOL(__asan_loadN_noabort);

void __asan_storeN(unsigned long addr, size_t size)
{
	check_memory_region(addr, size, true);
}
EXPORT_SYMBOL(__asan_storeN);

__alias(__asan_storeN)
void __asan_storeN_noabort(unsigned long, size_t);
EXPORT_SYMBOL(__asan_storeN_noabort);

/* to shut up compiler complaints */
void __asan_handle_no_return(void) {}
EXPORT_SYMBOL(__asan_handle_no_return);

#ifdef CONFIG_MEMORY_HOTPLUG
static int kasan_mem_notifier(struct notifier_block *nb,
			unsigned long action, void *data)
{
	return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK;
}

static int __init kasan_memhotplug_init(void)
{
	pr_err("WARNING: KASan doesn't support memory hot-add\n");
	pr_err("Memory hot-add will be disabled\n");

	hotplug_memory_notifier(kasan_mem_notifier, 0);

	return 0;
}

module_init(kasan_memhotplug_init);
#endif