// SPDX-License-Identifier: GPL-2.0
/*
 * This file contains common generic and tag-based KASAN code.
 *
 * Copyright (c) 2014 Samsung Electronics Co., Ltd.
 * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
 *
 * Some code borrowed from https://github.com/xairy/kasan-prototype by
 *        Andrey Konovalov <andreyknvl@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */

#include <linux/export.h>
#include <linux/init.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
#include <linux/kmemleak.h>
#include <linux/linkage.h>
#include <linux/memblock.h>
#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/slab.h>
#include <linux/stacktrace.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/vmalloc.h>
#include <linux/bug.h>

#include <asm/cacheflush.h>
#include <asm/tlbflush.h>

#include "kasan.h"
#include "../slab.h"

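/*
 * Capture the current call stack, trim it at the IRQ entry point (so that
 * interrupt-time traces stay deduplicable), and store it in the stack depot;
 * the returned handle is what the alloc/free tracks below record.
 */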
depot_stack_handle_t kasan_save_stack(gfp_t flags)
{
        unsigned long entries[KASAN_STACK_DEPTH];
        unsigned int nr_entries;

        nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
        nr_entries = filter_irq_stacks(entries, nr_entries);
        return stack_depot_save(entries, nr_entries, flags);
}

static inline void set_track(struct kasan_track *track, gfp_t flags)
{
        track->pid = current->pid;
        track->stack = kasan_save_stack(flags);
}

void kasan_enable_current(void)
{
        current->kasan_depth++;
}

void kasan_disable_current(void)
{
        current->kasan_depth--;
}

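/*
 * Out-of-line checks backing the kasan_check_read()/kasan_check_write()
 * annotations; they return false if the accessed range was found to be
 * invalid (the report has already been printed by then).
 */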
bool __kasan_check_read(const volatile void *p, unsigned int size)
{
        return check_memory_region((unsigned long)p, size, false, _RET_IP_);
}
EXPORT_SYMBOL(__kasan_check_read);

bool __kasan_check_write(const volatile void *p, unsigned int size)
{
        return check_memory_region((unsigned long)p, size, true, _RET_IP_);
}
EXPORT_SYMBOL(__kasan_check_write);

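/*
 * Instrumented replacements for the string routines: validate the accessed
 * ranges first, then hand off to the uninstrumented __mem*() implementations.
 * On a bad range the access is reported and NULL is returned.
 */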
#undef memset
void *memset(void *addr, int c, size_t len)
{
        if (!check_memory_region((unsigned long)addr, len, true, _RET_IP_))
                return NULL;

        return __memset(addr, c, len);
}

#ifdef __HAVE_ARCH_MEMMOVE
#undef memmove
void *memmove(void *dest, const void *src, size_t len)
{
        if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) ||
            !check_memory_region((unsigned long)dest, len, true, _RET_IP_))
                return NULL;

        return __memmove(dest, src, len);
}
#endif

#undef memcpy
void *memcpy(void *dest, const void *src, size_t len)
{
        if (!check_memory_region((unsigned long)src, len, false, _RET_IP_) ||
            !check_memory_region((unsigned long)dest, len, true, _RET_IP_))
                return NULL;

        return __memcpy(dest, src, len);
}

/*
 * Poisons the shadow memory for 'size' bytes starting from 'addr'.
 * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
 */
void kasan_poison_shadow(const void *address, size_t size, u8 value)
{
        void *shadow_start, *shadow_end;

        /*
         * Perform shadow offset calculation based on untagged address, as
         * some of the callers (e.g. kasan_poison_object_data) pass tagged
         * addresses to this function.
         */
        address = reset_tag(address);

        shadow_start = kasan_mem_to_shadow(address);
        shadow_end = kasan_mem_to_shadow(address + size);

        __memset(shadow_start, value, shadow_end - shadow_start);
}

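/*
 * Mark the range accessible again. Whole shadow bytes are set to 0 (or to the
 * pointer tag under CONFIG_KASAN_SW_TAGS); if size is not a multiple of
 * KASAN_SHADOW_SCALE_SIZE, the last shadow byte encodes how many leading
 * bytes of that granule are valid, e.g. a 13-byte region leaves 13 % 8 == 5
 * in its final shadow byte under generic KASAN.
 */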
void kasan_unpoison_shadow(const void *address, size_t size)
{
        u8 tag = get_tag(address);

        /*
         * Perform shadow offset calculation based on untagged address, as
         * some of the callers (e.g. kasan_unpoison_object_data) pass tagged
         * addresses to this function.
         */
        address = reset_tag(address);

        kasan_poison_shadow(address, size, tag);

        if (size & KASAN_SHADOW_MASK) {
                u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);

                if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
                        *shadow = tag;
                else
                        *shadow = size & KASAN_SHADOW_MASK;
        }
}

static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
{
        void *base = task_stack_page(task);
        size_t size = sp - base;

        kasan_unpoison_shadow(base, size);
}

/* Unpoison the entire stack for a task. */
void kasan_unpoison_task_stack(struct task_struct *task)
{
        __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
}

/* Unpoison the stack for the current task beyond a watermark sp value. */
asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
{
        /*
         * Calculate the task stack base address. Avoid using 'current'
         * because this function is called by early resume code which hasn't
         * yet set up the percpu register (%gs).
         */
        void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));

        kasan_unpoison_shadow(base, watermark - base);
}

/*
 * Clear all poison for the region between the current SP and a provided
 * watermark value, as is sometimes required prior to hand-crafted asm function
 * returns in the middle of functions.
 */
void kasan_unpoison_stack_above_sp_to(const void *watermark)
{
        const void *sp = __builtin_frame_address(0);
        size_t size = watermark - sp;

        if (WARN_ON(sp > watermark))
                return;
        kasan_unpoison_shadow(sp, size);
}

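/*
 * Page allocator hooks: on allocation every page of the (possibly high-order)
 * block gets the same tag (randomly chosen under CONFIG_KASAN_SW_TAGS) and the
 * backing memory is unpoisoned; kasan_free_pages() below re-poisons it with
 * KASAN_FREE_PAGE so use-after-free of page memory gets caught.
 */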
void kasan_alloc_pages(struct page *page, unsigned int order)
{
        u8 tag;
        unsigned long i;

        if (unlikely(PageHighMem(page)))
                return;

        tag = random_tag();
        for (i = 0; i < (1 << order); i++)
                page_kasan_tag_set(page + i, tag);
        kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
}

void kasan_free_pages(struct page *page, unsigned int order)
{
        if (likely(!PageHighMem(page)))
                kasan_poison_shadow(page_address(page),
                                PAGE_SIZE << order,
                                KASAN_FREE_PAGE);
}

/*
 * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
 * For larger allocations larger redzones are used.
 */
static inline unsigned int optimal_redzone(unsigned int object_size)
{
        if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
                return 0;

        return
                object_size <= 64        - 16   ? 16 :
                object_size <= 128       - 32   ? 32 :
                object_size <= 512       - 64   ? 64 :
                object_size <= 4096      - 128  ? 128 :
                object_size <= (1 << 14) - 256  ? 256 :
                object_size <= (1 << 15) - 512  ? 512 :
                object_size <= (1 << 16) - 1024 ? 1024 : 2048;
}

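/*
 * Example: a 192-byte object falls in the "<= 512 - 64" bucket and gets a
 * 64-byte redzone budget, while tag-based KASAN skips redzones entirely and
 * relies on pointer tag mismatches. kasan_cache_create() below spends this
 * budget when laying out the cache's objects and metadata.
 */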
void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
                        slab_flags_t *flags)
{
        unsigned int orig_size = *size;
        unsigned int redzone_size;
        int redzone_adjust;

        /* Add alloc meta. */
        cache->kasan_info.alloc_meta_offset = *size;
        *size += sizeof(struct kasan_alloc_meta);

        /* Add free meta. */
        if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
            (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
             cache->object_size < sizeof(struct kasan_free_meta))) {
                cache->kasan_info.free_meta_offset = *size;
                *size += sizeof(struct kasan_free_meta);
        }

        redzone_size = optimal_redzone(cache->object_size);
        redzone_adjust = redzone_size - (*size - cache->object_size);
        if (redzone_adjust > 0)
                *size += redzone_adjust;

        *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
                        max(*size, cache->object_size + redzone_size));

        /*
         * If the metadata doesn't fit, don't enable KASAN at all.
         */
        if (*size <= cache->kasan_info.alloc_meta_offset ||
            *size <= cache->kasan_info.free_meta_offset) {
                cache->kasan_info.alloc_meta_offset = 0;
                cache->kasan_info.free_meta_offset = 0;
                *size = orig_size;
                return;
        }

        *flags |= SLAB_KASAN;
}

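/* How much of each object's slab footprint is occupied by KASAN metadata. */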
size_t kasan_metadata_size(struct kmem_cache *cache)
{
        return (cache->kasan_info.alloc_meta_offset ?
                sizeof(struct kasan_alloc_meta) : 0) +
                (cache->kasan_info.free_meta_offset ?
                sizeof(struct kasan_free_meta) : 0);
}

struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
                                        const void *object)
{
        return (void *)object + cache->kasan_info.alloc_meta_offset;
}

struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
                                      const void *object)
{
        BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
        return (void *)object + cache->kasan_info.free_meta_offset;
}

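/*
 * Record the stack trace of this free. With CONFIG_KASAN_SW_TAGS_IDENTIFY a
 * small ring of recent free stacks is kept together with the pointer tags
 * used to free, so a later report can tell which free matches the bad access.
 */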
static void kasan_set_free_info(struct kmem_cache *cache,
                                void *object, u8 tag)
{
        struct kasan_alloc_meta *alloc_meta;
        u8 idx = 0;

        alloc_meta = get_alloc_info(cache, object);

#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
        idx = alloc_meta->free_track_idx;
        alloc_meta->free_pointer_tag[idx] = tag;
        alloc_meta->free_track_idx = (idx + 1) % KASAN_NR_FREE_STACKS;
#endif

        set_track(&alloc_meta->free_track[idx], GFP_NOWAIT);
}

void kasan_poison_slab(struct page *page)
{
        unsigned long i;

        for (i = 0; i < compound_nr(page); i++)
                page_kasan_tag_reset(page + i);
        kasan_poison_shadow(page_address(page), page_size(page),
                        KASAN_KMALLOC_REDZONE);
}

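/*
 * Used by the slab allocator around object setup (e.g. while a constructor
 * runs): the object is temporarily unpoisoned and then poisoned again until
 * it is actually handed out by kasan_slab_alloc()/kasan_kmalloc().
 */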
void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
{
        kasan_unpoison_shadow(object, cache->object_size);
}

void kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
        kasan_poison_shadow(object,
                        round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
                        KASAN_KMALLOC_REDZONE);
}

/*
 * This function assigns a tag to an object considering the following:
 * 1. A cache might have a constructor, which might save a pointer to a slab
 *    object somewhere (e.g. in the object itself). We preassign a tag for
 *    each object in caches with constructors during slab creation and reuse
 *    the same tag each time a particular object is allocated.
 * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be
 *    accessed after being freed. We preassign tags for objects in these
 *    caches as well.
 * 3. For SLAB allocator we can't preassign tags randomly since the freelist
 *    is stored as an array of indexes instead of a linked list. Assign tags
 *    based on objects indexes, so that objects that are next to each other
 *    get different tags.
 */
static u8 assign_tag(struct kmem_cache *cache, const void *object,
                        bool init, bool keep_tag)
{
        /*
         * 1. When an object is kmalloc()'ed, two hooks are called:
         *    kasan_slab_alloc() and kasan_kmalloc(). We assign the
         *    tag only in the first one.
         * 2. We reuse the same tag for krealloc'ed objects.
         */
        if (keep_tag)
                return get_tag(object);

        /*
         * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
         * set, assign a tag when the object is being allocated (init == false).
         */
        if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))
                return init ? KASAN_TAG_KERNEL : random_tag();

        /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
#ifdef CONFIG_SLAB
        /* For SLAB assign tags based on the object index in the freelist. */
        return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
#else
        /*
         * For SLUB assign a random tag during slab creation, otherwise reuse
         * the already assigned tag.
         */
        return init ? random_tag() : get_tag(object);
#endif
}

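/*
 * Called when a slab object is first set up: clear its alloc metadata and,
 * for tag-based KASAN, fold the preassigned tag into the returned pointer.
 */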
void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
                                                const void *object)
{
        struct kasan_alloc_meta *alloc_info;

        if (!(cache->flags & SLAB_KASAN))
                return (void *)object;

        alloc_info = get_alloc_info(cache, object);
        __memset(alloc_info, 0, sizeof(*alloc_info));

        if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
                object = set_tag(object,
                                assign_tag(cache, object, true, false));

        return (void *)object;
}

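/*
 * Free-path sanity check. For generic KASAN the shadow byte of an allocated
 * object must be in 0..KASAN_SHADOW_SCALE_SIZE-1; anything else means a
 * double-free or invalid free. For tag-based KASAN the pointer tag must match
 * the shadow tag unless the pointer carries the match-all KASAN_TAG_KERNEL.
 */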
static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
{
        if (IS_ENABLED(CONFIG_KASAN_GENERIC))
                return shadow_byte < 0 ||
                        shadow_byte >= KASAN_SHADOW_SCALE_SIZE;

        /* else CONFIG_KASAN_SW_TAGS: */
        if ((u8)shadow_byte == KASAN_TAG_INVALID)
                return true;
        if ((tag != KASAN_TAG_KERNEL) && (tag != (u8)shadow_byte))
                return true;

        return false;
}

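/*
 * Returns true if KASAN has taken ownership of the object, either because the
 * free was invalid and has already been reported, or because the object was
 * placed into the generic-KASAN quarantine; false tells the slab allocator to
 * go ahead with the actual free.
 */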
static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
                              unsigned long ip, bool quarantine)
{
        s8 shadow_byte;
        u8 tag;
        void *tagged_object;
        unsigned long rounded_up_size;

        tag = get_tag(object);
        tagged_object = object;
        object = reset_tag(object);

        if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
            object)) {
                kasan_report_invalid_free(tagged_object, ip);
                return true;
        }

        /* RCU slabs could be legally used after free within the RCU period */
        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
                return false;

        shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
        if (shadow_invalid(tag, shadow_byte)) {
                kasan_report_invalid_free(tagged_object, ip);
                return true;
        }

        rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
        kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);

        if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
                        unlikely(!(cache->flags & SLAB_KASAN)))
                return false;

        kasan_set_free_info(cache, object, tag);

        quarantine_put(get_free_info(cache, object), cache);

        return IS_ENABLED(CONFIG_KASAN_GENERIC);
}

bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
{
        return __kasan_slab_free(cache, object, ip, true);
}

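/*
 * Common allocation hook: unpoison exactly 'size' bytes, poison the remainder
 * of the object up to object_size as a redzone (rounded to shadow
 * granularity), pick or keep a pointer tag, and record the allocation stack.
 */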
static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
                                size_t size, gfp_t flags, bool keep_tag)
{
        unsigned long redzone_start;
        unsigned long redzone_end;
        u8 tag = 0xff;

        if (gfpflags_allow_blocking(flags))
                quarantine_reduce();

        if (unlikely(object == NULL))
                return NULL;

        redzone_start = round_up((unsigned long)(object + size),
                                KASAN_SHADOW_SCALE_SIZE);
        redzone_end = round_up((unsigned long)object + cache->object_size,
                                KASAN_SHADOW_SCALE_SIZE);

        if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
                tag = assign_tag(cache, object, false, keep_tag);

        /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
        kasan_unpoison_shadow(set_tag(object, tag), size);
        kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
                KASAN_KMALLOC_REDZONE);

        if (cache->flags & SLAB_KASAN)
                set_track(&get_alloc_info(cache, object)->alloc_track, flags);

        return set_tag(object, tag);
}

void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
                                        gfp_t flags)
{
        return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
}

void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
                                size_t size, gfp_t flags)
{
        return __kasan_kmalloc(cache, object, size, flags, true);
}
EXPORT_SYMBOL(kasan_kmalloc);

void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
                                                gfp_t flags)
{
        struct page *page;
        unsigned long redzone_start;
        unsigned long redzone_end;

        if (gfpflags_allow_blocking(flags))
                quarantine_reduce();

        if (unlikely(ptr == NULL))
                return NULL;

        page = virt_to_page(ptr);
        redzone_start = round_up((unsigned long)(ptr + size),
                                KASAN_SHADOW_SCALE_SIZE);
        redzone_end = (unsigned long)ptr + page_size(page);

        kasan_unpoison_shadow(ptr, size);
        kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
                KASAN_PAGE_REDZONE);

        return (void *)ptr;
}

void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
        struct page *page;

        if (unlikely(object == ZERO_SIZE_PTR))
                return (void *)object;

        page = virt_to_head_page(object);

        if (unlikely(!PageSlab(page)))
                return kasan_kmalloc_large(object, size, flags);
        else
                return __kasan_kmalloc(page->slab_cache, object, size,
                                                flags, true);
}

void kasan_poison_kfree(void *ptr, unsigned long ip)
{
        struct page *page;

        page = virt_to_head_page(ptr);

        if (unlikely(!PageSlab(page))) {
                if (ptr != page_address(page)) {
                        kasan_report_invalid_free(ptr, ip);
                        return;
                }
                kasan_poison_shadow(ptr, page_size(page), KASAN_FREE_PAGE);
        } else {
                __kasan_slab_free(page->slab_cache, ptr, ip, false);
        }
}

void kasan_kfree_large(void *ptr, unsigned long ip)
{
        if (ptr != page_address(virt_to_head_page(ptr)))
                kasan_report_invalid_free(ptr, ip);
        /* The object will be poisoned by page_alloc. */
}

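/*
 * Without CONFIG_KASAN_VMALLOC, module space needs its shadow allocated
 * explicitly: kasan_module_alloc() vmallocs shadow for a freshly allocated
 * module region and flags the region's vm_struct with VM_KASAN so that
 * kasan_free_shadow() can release the shadow when the region is freed.
 */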
#ifndef CONFIG_KASAN_VMALLOC
int kasan_module_alloc(void *addr, size_t size)
{
        void *ret;
        size_t scaled_size;
        size_t shadow_size;
        unsigned long shadow_start;

        shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
        scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
        shadow_size = round_up(scaled_size, PAGE_SIZE);

        if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
                return -EINVAL;

        ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
                        shadow_start + shadow_size,
                        GFP_KERNEL,
                        PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
                        __builtin_return_address(0));

        if (ret) {
                __memset(ret, KASAN_SHADOW_INIT, shadow_size);
                find_vm_area(addr)->flags |= VM_KASAN;
                kmemleak_ignore(ret);
                return 0;
        }

        return -ENOMEM;
}

void kasan_free_shadow(const struct vm_struct *vm)
{
        if (vm->flags & VM_KASAN)
                vfree(kasan_mem_to_shadow(vm->addr));
}
#endif

#ifdef CONFIG_MEMORY_HOTPLUG
static bool shadow_mapped(unsigned long addr)
{
        pgd_t *pgd = pgd_offset_k(addr);
        p4d_t *p4d;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        if (pgd_none(*pgd))
                return false;
        p4d = p4d_offset(pgd, addr);
        if (p4d_none(*p4d))
                return false;
        pud = pud_offset(p4d, addr);
        if (pud_none(*pud))
                return false;

        /*
         * We can't use pud_large() or pud_huge(), the first one is
         * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse
         * pud_bad(), if pud is bad then it's bad because it's huge.
         */
        if (pud_bad(*pud))
                return true;
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd))
                return false;

        if (pmd_bad(*pmd))
                return true;
        pte = pte_offset_kernel(pmd, addr);
        return !pte_none(*pte);
}

static int __meminit kasan_mem_notifier(struct notifier_block *nb,
                        unsigned long action, void *data)
{
        struct memory_notify *mem_data = data;
        unsigned long nr_shadow_pages, start_kaddr, shadow_start;
        unsigned long shadow_end, shadow_size;

        nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT;
        start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn);
        shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr);
        shadow_size = nr_shadow_pages << PAGE_SHIFT;
        shadow_end = shadow_start + shadow_size;

        if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) ||
                WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)))
                return NOTIFY_BAD;

        switch (action) {
        case MEM_GOING_ONLINE: {
                void *ret;

                /*
                 * If the shadow is already mapped, then it must have been
                 * mapped during boot. This can happen when onlining
                 * previously offlined memory.
                 */
                if (shadow_mapped(shadow_start))
                        return NOTIFY_OK;

                ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
                                        shadow_end, GFP_KERNEL,
                                        PAGE_KERNEL, VM_NO_GUARD,
                                        pfn_to_nid(mem_data->start_pfn),
                                        __builtin_return_address(0));
                if (!ret)
                        return NOTIFY_BAD;

                kmemleak_ignore(ret);
                return NOTIFY_OK;
        }
        case MEM_CANCEL_ONLINE:
        case MEM_OFFLINE: {
                struct vm_struct *vm;

                /*
                 * shadow_start was either mapped during boot by kasan_init()
                 * or during memory online by __vmalloc_node_range().
                 * In the latter case we can use vfree() to free shadow.
                 * Non-NULL result of the find_vm_area() will tell us if
                 * that was the second case.
                 *
                 * Currently it's not possible to free shadow mapped
                 * during boot by kasan_init(). It's because the code
                 * to do that hasn't been written yet. So we'll just
                 * leak the memory.
                 */
                vm = find_vm_area((void *)shadow_start);
                if (vm)
                        vfree((void *)shadow_start);
        }
        }

        return NOTIFY_OK;
}

static int __init kasan_memhotplug_init(void)
{
        hotplug_memory_notifier(kasan_mem_notifier, 0);

        return 0;
}

core_initcall(kasan_memhotplug_init);
#endif

#ifdef CONFIG_KASAN_VMALLOC
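/*
 * With CONFIG_KASAN_VMALLOC the shadow for vmalloc space is populated on
 * demand: for each shadow PTE that is still unmapped, allocate a page, fill
 * it with KASAN_VMALLOC_INVALID, and install it under init_mm's
 * page_table_lock, dropping our page if another CPU won the race.
 */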
static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
                                      void *unused)
{
        unsigned long page;
        pte_t pte;

        if (likely(!pte_none(*ptep)))
                return 0;

        page = __get_free_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
        pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

        spin_lock(&init_mm.page_table_lock);
        if (likely(pte_none(*ptep))) {
                set_pte_at(&init_mm, addr, ptep, pte);
                page = 0;
        }
        spin_unlock(&init_mm.page_table_lock);
        if (page)
                free_page(page);
        return 0;
}

int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
{
	unsigned long shadow_start, shadow_end;
	int ret;

	if (!is_vmalloc_or_module_addr((void *)addr))
		return 0;

	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
	shadow_end = ALIGN(shadow_end, PAGE_SIZE);

	ret = apply_to_page_range(&init_mm, shadow_start,
				  shadow_end - shadow_start,
				  kasan_populate_vmalloc_pte, NULL);
	if (ret)
		return ret;

	flush_cache_vmap(shadow_start, shadow_end);

	/*
	 * We need to be careful about inter-cpu effects here. Consider:
	 *
	 *   CPU#0				  CPU#1
	 *   WRITE_ONCE(p, vmalloc(100));	  while (x = READ_ONCE(p)) ;
	 *					  p[99] = 1;
	 *
	 * With compiler instrumentation, that ends up looking like this:
	 *
	 *   CPU#0				  CPU#1
	 *   // vmalloc() allocates memory
	 *   // let a = area->addr
	 *   // we reach kasan_populate_vmalloc
	 *   // and call kasan_unpoison_shadow:
	 *   STORE shadow(a), unpoison_val
	 *   ...
	 *   STORE shadow(a+99), unpoison_val	  x = LOAD p
	 *   // rest of vmalloc process		  <data dependency>
	 *   STORE p, a				  LOAD shadow(x+99)
	 *
	 * If there is no barrier between the end of unpoisoning the shadow
	 * and the store of the result to p, the stores could be committed
	 * in a different order by CPU#0, and CPU#1 could erroneously observe
	 * poison in the shadow.
	 *
	 * We need some sort of barrier between the stores.
	 *
	 * In the vmalloc() case, this is provided by a smp_wmb() in
	 * clear_vm_uninitialized_flag(). In the per-cpu allocator and in
	 * get_vm_area() and friends, the caller gets shadow allocated but
	 * doesn't have any pages mapped into the virtual address space that
	 * has been reserved. Mapping those pages in will involve taking and
	 * releasing a page-table lock, which will provide the barrier.
	 */

	return 0;
}
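
/*
 * The ordering requirement described at the end of
 * kasan_populate_vmalloc() above, reduced to a minimal publisher/consumer
 * sketch (illustrative only, not code from this file; on the real
 * vmalloc() path the barrier is the smp_wmb() in
 * clear_vm_uninitialized_flag()):
 *
 *	// publisher				// consumer
 *	kasan_unpoison_shadow(a, 100);
 *	smp_wmb();				x = READ_ONCE(p);
 *	WRITE_ONCE(p, a);			x[99] = 1;
 *						// the address dependency on x
 *						// orders the shadow load after
 *						// the load of p
 */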

/*
 * Poison the shadow for a vmalloc region. Called as part of the
 * freeing process at the time the region is freed.
 */
void kasan_poison_vmalloc(const void *start, unsigned long size)
{
	if (!is_vmalloc_or_module_addr(start))
		return;

	size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
	kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID);
}

void kasan_unpoison_vmalloc(const void *start, unsigned long size)
{
	if (!is_vmalloc_or_module_addr(start))
		return;

	kasan_unpoison_shadow(start, size);
}
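
/*
 * Granularity note (illustrative; the scale depends on the KASAN mode,
 * e.g. KASAN_SHADOW_SCALE_SIZE == 8 for generic KASAN): with a scale of
 * 8, kasan_poison_vmalloc() above rounds a 100-byte region up to 104
 * bytes, i.e. 13 whole shadow granules, before poisoning it. Unpoisoning
 * a size that is not granule-aligned relies on kasan_unpoison_shadow()
 * marking the partially accessible last granule in its shadow byte.
 */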
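
/*
 * Callback for apply_to_existing_page_range(): the mirror image of
 * kasan_populate_vmalloc_pte(). The backing page is derived from the
 * PTE, the PTE is cleared under init_mm.page_table_lock, and the page
 * is freed. The pte_none() re-check under the lock guards against a
 * concurrent release of the same shadow page.
 */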
static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
					void *unused)
{
	unsigned long page;

	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);

	spin_lock(&init_mm.page_table_lock);

	if (likely(!pte_none(*ptep))) {
		pte_clear(&init_mm, addr, ptep);
		free_page(page);
	}
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}

/*
 * Release the backing for the vmalloc region [start, end), which
 * lies within the free region [free_region_start, free_region_end).
 *
 * This can be run lazily, long after the region was freed. It runs
 * under free_vmap_area_lock, so it's not safe to interact with the
 * vmalloc/vmap infrastructure.
 *
 * How does this work?
 * -------------------
 *
 * We have a region that is page aligned, labelled as A.
 * That might not map onto the shadow in a way that is page-aligned:
 *
 *                    start                     end
 *                    v                         v
 * |????????|????????|AAAAAAAA|AA....AA|AAAAAAAA|????????| < vmalloc
 *  -------- -------- -------- -------- --------
 *      |        |        |        |        |
 *      |        |        |       /--------/|
 *      \--------\|/------/       |/--------/
 *              |||               ||
 *          |??AAAAAA|AAAAAAAA|AA??????|                   < shadow
 *          (1)      (2)      (3)
 *
 * First we align the start upwards and the end downwards, so that the
 * shadow of the region aligns with shadow page boundaries. In the
 * example, this gives us the shadow page (2). This is the shadow entirely
 * covered by this allocation.
 *
 * Then we have the tricky bits. We want to know if we can free the
 * partially covered shadow pages - (1) and (3) in the example. For this,
 * we are given the start and end of the free region that contains this
 * allocation. Extending our previous example, we could have:
 *
 *     free_region_start                        free_region_end
 *     |              start                     end       |
 *     v              v                         v         v
 * |FFFFFFFF|FFFFFFFF|AAAAAAAA|AA....AA|AAAAAAAA|FFFFFFFF| < vmalloc
 *  -------- -------- -------- -------- --------
 *      |        |        |        |        |
 *      |        |        |       /--------/|
 *      \--------\|/------/       |/--------/
 *              |||               ||
 *          |FFAAAAAA|AAAAAAAA|AAF?????|                   < shadow
 *          (1)      (2)      (3)
 *
 * Once again, we align the start of the free region up, and the end of
 * the free region down so that the shadow is page aligned. So we can free
 * page (1) - we know no allocation currently uses anything in that page,
 * because all of it is in the vmalloc free region. But we cannot free
 * page (3), because we can't be sure that the rest of it is unused.
 *
 * We only consider pages that contain part of the original region for
 * freeing: we don't try to free other pages from the free region or we'd
 * end up trying to free huge chunks of virtual address space.
 *
 * Concurrency
 * -----------
 *
 * How do we know that we're not freeing a page that is simultaneously
 * being used for a fresh allocation in kasan_populate_vmalloc(_pte)?
 *
 * We _can_ have kasan_release_vmalloc and kasan_populate_vmalloc running
 * at the same time. While we run under free_vmap_area_lock, the
 * population code does not.
 *
 * free_vmap_area_lock instead operates to ensure that the larger range
 * [free_region_start, free_region_end) is safe: because __alloc_vmap_area
 * and the per-cpu region-finding algorithm both run under
 * free_vmap_area_lock, no space identified as free will become used while
 * we are running. This means that so long as we are careful with
 * alignment and only free shadow pages entirely covered by the free
 * region, we will not run into any trouble - any simultaneous allocations
 * will be for disjoint regions.
 */
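/*
 * A worked instance of the picture above (illustrative numbers only,
 * assuming 4 KiB pages and KASAN_SHADOW_SCALE_SIZE == 8, so one shadow
 * page covers 8 * PAGE_SIZE == 32 KiB of vmalloc space):
 *
 *	free region:      [ 32 KiB, 112 KiB)
 *	freed allocation: [ 36 KiB, 100 KiB)
 *
 *	region_start = ALIGN(36K, 32K)       = 64K
 *	region_end   = ALIGN_DOWN(100K, 32K) = 96K
 *
 * The block [32K, 64K) is only partially covered by the allocation, but
 * the remainder of it lies inside the free region, so region_start is
 * pulled back to 32K. The block [96K, 128K) reaches past the aligned
 * free_region_end (ALIGN_DOWN(112K, 32K) == 96K), so it is left alone.
 * The shadow of [32K, 96K) - two shadow pages - is released.
 */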
void kasan_release_vmalloc(unsigned long start, unsigned long end,
			   unsigned long free_region_start,
			   unsigned long free_region_end)
{
	void *shadow_start, *shadow_end;
	unsigned long region_start, region_end;
	unsigned long size;

	region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
	region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);

	free_region_start = ALIGN(free_region_start,
				  PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);

	if (start != region_start &&
	    free_region_start < region_start)
		region_start -= PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;

	free_region_end = ALIGN_DOWN(free_region_end,
				     PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);

	if (end != region_end &&
	    free_region_end > region_end)
		region_end += PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;

	shadow_start = kasan_mem_to_shadow((void *)region_start);
	shadow_end = kasan_mem_to_shadow((void *)region_end);

	if (shadow_end > shadow_start) {
kasan: use apply_to_existing_page_range() for releasing vmalloc shadow
kasan_release_vmalloc uses apply_to_page_range to release vmalloc
shadow. Unfortunately, apply_to_page_range can allocate memory to fill
in page table entries, which is not what we want.
Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
apply_to_page_range does allocate memory, we get a sleep in atomic bug:
BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x199/0x216 lib/dump_stack.c:118
___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
__might_sleep+0x95/0x190 kernel/sched/core.c:6753
prepare_alloc_pages mm/page_alloc.c:4681 [inline]
__alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
alloc_pages include/linux/gfp.h:532 [inline]
__get_free_pages+0xc/0x40 mm/page_alloc.c:4786
__pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
__pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
apply_to_pte_range mm/memory.c:2031 [inline]
apply_to_pmd_range mm/memory.c:2068 [inline]
apply_to_pud_range mm/memory.c:2088 [inline]
apply_to_p4d_range mm/memory.c:2108 [inline]
apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
__purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
vm_remove_mappings mm/vmalloc.c:2236 [inline]
__vunmap+0x223/0xa20 mm/vmalloc.c:2299
__vfree+0x3f/0xd0 mm/vmalloc.c:2356
__vmalloc_area_node mm/vmalloc.c:2507 [inline]
__vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
__vmalloc_node mm/vmalloc.c:2607 [inline]
__vmalloc_node_flags mm/vmalloc.c:2621 [inline]
vzalloc+0x6f/0x80 mm/vmalloc.c:2666
alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
alloc_pg_vec net/packet/af_packet.c:4258 [inline]
packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
__sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
__do_sys_setsockopt net/socket.c:2133 [inline]
__se_sys_setsockopt net/socket.c:2130 [inline]
__x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Switch to using the apply_to_existing_page_range() helper instead: it only
visits page-table entries that already exist, so it won't allocate memory
(a simplified sketch of this call pattern follows this log entry).
[akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-12-18 12:51:46 +08:00
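To make the switch described above concrete, here is a minimal sketch of the callback side of this pattern, assuming the pte_fn_t signature of that era, i.e. int (*)(pte_t *, unsigned long, void *). It only approximates what kasan_depopulate_vmalloc_pte() in mm/kasan/common.c does; the name depopulate_shadow_pte_sketch and its exact body are illustrative, not the verbatim kernel code. The real call site, via apply_to_existing_page_range(), appears in the annotated source just below.

#include <linux/mm.h>

/*
 * Sketch of a pte_fn_t callback suitable for apply_to_existing_page_range():
 * free the shadow page backing this PTE and clear the PTE, under init_mm's
 * page-table lock.  Because apply_to_existing_page_range() only visits
 * page-table entries that already exist, the walk never allocates memory
 * and is therefore safe to run under free_vmap_area_lock.
 */
static int depopulate_shadow_pte_sketch(pte_t *ptep, unsigned long addr,
					void *unused)
{
	unsigned long page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);

	spin_lock(&init_mm.page_table_lock);
	if (likely(!pte_none(*ptep))) {
		pte_clear(&init_mm, addr, ptep);
		free_page(page);
	}
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}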
|
|
|
size = shadow_end - shadow_start;
|
|
|
|
apply_to_existing_page_range(&init_mm,
|
|
|
|
(unsigned long)shadow_start,
|
|
|
|
size, kasan_depopulate_vmalloc_pte,
|
|
|
|
NULL);
|
kasan: support backing vmalloc space with real shadow memory
Patch series "kasan: support backing vmalloc space with real shadow
memory", v11.
Currently, vmalloc space is backed by the early shadow page. This means
that kasan is incompatible with VMAP_STACK.
This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's very
easy to wire up other architectures, and it appears that there is some
work-in-progress code to do this on arm64 and s390.
This has been discussed before in the context of VMAP_STACK:
- https://bugzilla.kernel.org/show_bug.cgi?id=202009
- https://lkml.org/lkml/2018/7/22/198
- https://lkml.org/lkml/2019/7/19/822
In terms of implementation details:
Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
Instead, share backing space across multiple mappings. Allocate a
backing page when a mapping in vmalloc space uses a particular page of
the shadow region. This page can be shared by other vmalloc mappings
later on.
We hook in to the vmap infrastructure to lazily clean up unused shadow
memory.
Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
- Turning on KASAN, inline instrumentation, without vmalloc, introduces
a 4.1x-4.2x slowdown in vmalloc operations.
- Turning this on introduces the following slowdowns over KASAN:
* ~1.76x slower single-threaded (test_vmalloc.sh performance)
* ~2.18x slower when both cpus are performing operations
simultaneously (test_vmalloc.sh sequential_test_order=1)
This is unfortunate, but given that this is a debug-only feature, it is
not the end of the world. The benchmarks are also a stress test for the
vmalloc subsystem: they're not indicative of an overall 2x slowdown!
This patch (of 4):
Hook into vmalloc and vmap, and dynamically allocate real shadow memory
to back the mappings.
Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE (a worked sketch of this arithmetic
follows this log entry).
Instead, share backing space across multiple mappings. Allocate a
backing page when a mapping in vmalloc space uses a particular page of
the shadow region. This page can be shared by other vmalloc mappings
later on.
We hook in to the vmap infrastructure to lazily clean up unused shadow
memory.
To avoid the difficulties of swapping mappings around, this code
expects that the part of the shadow region that covers the vmalloc space
will not be covered by the early shadow page, but will be left unmapped.
This will require changes in arch-specific code.
This allows KASAN with VMAP_STACK, and may be helpful for architectures
that do not have a separate module space (e.g. powerpc64, which I am
currently working on). It also allows relaxing the module alignment
back to PAGE_SIZE.
Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:
- Turning on KASAN, inline instrumentation, without vmalloc, introduces
a 4.1x-4.2x slowdown in vmalloc operations.
- Turning this on introduces the following slowdowns over KASAN:
* ~1.76x slower single-threaded (test_vmalloc.sh performance)
* ~2.18x slower when both cpus are performing operations
simultaneously (test_vmalloc.sh sequential_test_order=1)
This is unfortunate, but given that this is a debug-only feature, it is
not the end of the world.
The full benchmark results are:

Performance

                                 No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN
fix_size_alloc_test                662004        11404956       17.23        19144610       28.92     1.68
full_fit_alloc_test                710950        12029752       16.92        13184651       18.55     1.10
long_busy_list_alloc_test         9431875        43990172        4.66        82970178        8.80     1.89
random_size_alloc_test            5033626        23061762        4.58        47158834        9.37     2.04
fix_align_alloc_test              1252514        15276910       12.20        31266116       24.96     2.05
random_size_align_alloc_te        1648501        14578321        8.84        25560052       15.51     1.75
align_shift_alloc_test                147             830        5.65            5692       38.72     6.86
pcpu_alloc_test                     80732          125520        1.55          140864        1.74     1.12
Total Cycles                 119240774314    763211341128        6.40   1390338696894       11.66     1.82

Sequential, 2 cpus

                                 No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN
fix_size_alloc_test               1423150        14276550       10.03        27733022       19.49     1.94
full_fit_alloc_test               1754219        14722640        8.39        15030786        8.57     1.02
long_busy_list_alloc_test        11451858        52154973        4.55       107016027        9.34     2.05
random_size_alloc_test            5989020        26735276        4.46        68885923       11.50     2.58
fix_align_alloc_test              2050976        20166900        9.83        50491675       24.62     2.50
random_size_align_alloc_te        2858229        17971700        6.29        38730225       13.55     2.16
align_shift_alloc_test                405            6428       15.87           26253       64.82     4.08
pcpu_alloc_test                    127183          151464        1.19          216263        1.70     1.43
Total Cycles                  54181269392    308723699764        5.70    650772566394       12.01     2.11

fix_size_alloc_test               1420404        14289308       10.06        27790035       19.56     1.94
full_fit_alloc_test               1736145        14806234        8.53        15274301        8.80     1.03
long_busy_list_alloc_test        11404638        52270785        4.58       107550254        9.43     2.06
random_size_alloc_test            6017006        26650625        4.43        68696127       11.42     2.58
fix_align_alloc_test              2045504        20280985        9.91        50414862       24.65     2.49
random_size_align_alloc_te        2845338        17931018        6.30        38510276       13.53     2.15
align_shift_alloc_test                472            3760        7.97            9656       20.46     2.57
pcpu_alloc_test                    118643          132732        1.12          146504        1.23     1.10
Total Cycles                  54040011688    309102805492        5.72    651325675652       12.05     2.11
[dja@axtens.net: fixups]
Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [shadow rework]
Signed-off-by: Daniel Axtens <dja@axtens.net>
Co-developed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-12-01 09:54:50 +08:00
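As referenced above, here is a worked sketch of the shadow arithmetic behind the KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE alignment performed by kasan_release_vmalloc() earlier in this file. It is a stand-alone, 64-bit userspace model, assuming generic KASAN with one shadow byte per 8 bytes (KASAN_SHADOW_SCALE_SHIFT == 3), 4 KiB pages, and an x86_64-style shadow offset; mem_to_shadow_sketch() mirrors kasan_mem_to_shadow(), but the names and constants suffixed _SKETCH, as well as the example addresses, are illustrative assumptions rather than kernel definitions.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE_SKETCH		4096UL
#define KASAN_SHADOW_SCALE_SHIFT	3
#define KASAN_SHADOW_SCALE_SIZE		(1UL << KASAN_SHADOW_SCALE_SHIFT)
#define SHADOW_OFFSET_SKETCH		0xdffffc0000000000UL

static uintptr_t mem_to_shadow_sketch(uintptr_t addr)
{
	/* Mirrors kasan_mem_to_shadow(): scale the address down, then relocate. */
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + SHADOW_OFFSET_SKETCH;
}

int main(void)
{
	/* Hypothetical two-page vmalloc area. */
	uintptr_t start = 0xffffc90000001000UL;
	uintptr_t end = start + 2 * PAGE_SIZE_SKETCH;

	/*
	 * One shadow page covers PAGE_SIZE_SKETCH * KASAN_SHADOW_SCALE_SIZE
	 * = 32 KiB of vmalloc space, so neighbouring vmalloc areas can share
	 * a backing shadow page.  kasan_release_vmalloc() therefore only
	 * frees shadow for the part of a freed region aligned to that
	 * granularity (plus partial pages whose whole span is known free).
	 */
	printf("shadow bytes for [%#lx, %#lx): [%#lx, %#lx)\n",
	       (unsigned long)start, (unsigned long)end,
	       (unsigned long)mem_to_shadow_sketch(start),
	       (unsigned long)mem_to_shadow_sketch(end));
	printf("one shadow page covers %lu bytes of vmalloc address space\n",
	       PAGE_SIZE_SKETCH * KASAN_SHADOW_SCALE_SIZE);

	return 0;
}

The point of the example is the granularity: one shadow page maps 32 KiB of vmalloc address space, so shadow can only be released for spans whose full 32 KiB coverage is known to be unused, which is exactly what the region_start/region_end adjustments in the code above guard.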
|
|
|
flush_tlb_kernel_range((unsigned long)shadow_start,
|
|
|
|
(unsigned long)shadow_end);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|