// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright 2007 Andi Kleen, SUSE Labs.
 *
 * This contains most of the x86 vDSO kernel-side code.
 */
#include <linux/mm.h>
#include <linux/err.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/random.h>
#include <linux/elf.h>
#include <linux/cpu.h>
#include <linux/ptrace.h>
#include <linux/time_namespace.h>

#include <asm/pvclock.h>
#include <asm/vgtod.h>
#include <asm/proto.h>
#include <asm/vdso.h>
#include <asm/vvar.h>
#include <asm/tlb.h>
#include <asm/page.h>
#include <asm/desc.h>
#include <asm/cpufeature.h>
#include <clocksource/hyperv_timer.h>
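
/*
 * Re-include asm/vvar.h below with EMIT_VVAR() redefined so that every vvar
 * declared there expands to a "<name>_offset" constant holding its byte
 * offset within the vvar page.
 */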
#undef _ASM_X86_VVAR_H
#define EMIT_VVAR(name, offset)	\
	const size_t name ## _offset = offset;
#include <asm/vvar.h>

struct vdso_data *arch_get_vdso_data(void *vvar_page)
{
	return (struct vdso_data *)(vvar_page + _vdso_data_offset);
}
#undef EMIT_VVAR

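/* Bitmask of VDSO_CLOCKMODE_* clock modes that have been used (tested below via vclock_was_used()). */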
unsigned int vclocks_used __read_mostly;

#if defined(CONFIG_X86_64)
unsigned int __read_mostly vdso64_enabled = 1;
#endif

void __init init_vdso_image(const struct vdso_image *image)
{
	BUG_ON(image->size % PAGE_SIZE != 0);

	apply_alternatives((struct alt_instr *)(image->data + image->alt),
			   (struct alt_instr *)(image->data + image->alt +
						image->alt_len));
}

static const struct vm_special_mapping vvar_mapping;
struct linux_binprm;

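/*
 * Fault handler for the [vdso] mapping: return the page of vDSO image text
 * that backs the faulting offset.
 */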
static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
|
2015-12-30 12:12:22 +08:00
|
|
|
struct vm_area_struct *vma, struct vm_fault *vmf)
|
|
|
|
{
|
|
|
|
const struct vdso_image *image = vma->vm_mm->context.vdso_image;
|
|
|
|
|
|
|
|
if (!image || (vmf->pgoff << PAGE_SHIFT) >= image->size)
|
|
|
|
return VM_FAULT_SIGBUS;
|
|
|
|
|
|
|
|
vmf->page = virt_to_page(image->data + (vmf->pgoff << PAGE_SHIFT));
|
|
|
|
get_page(vmf->page);
|
|
|
|
return 0;
|
|
|
|
}
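
/*
 * mremap() callback for the [vdso] mapping: record the new location in
 * mm->context and, for the 32-bit vDSO, fix up the saved user IP if it
 * points at the old int80 landing pad (see do_fast_syscall_32()).
 */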
static void vdso_fix_landing(const struct vdso_image *image,
		struct vm_area_struct *new_vma)
{
#if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
	if (in_ia32_syscall() && image == &vdso_image_32) {
		struct pt_regs *regs = current_pt_regs();
		unsigned long vdso_land = image->sym_int80_landing_pad;
		unsigned long old_land_addr = vdso_land +
			(unsigned long)current->mm->context.vdso;

		/* Fixing userspace landing - look at do_fast_syscall_32 */
		if (regs->ip == old_land_addr)
			regs->ip = new_vma->vm_start + vdso_land;
	}
#endif
}

static int vdso_mremap(const struct vm_special_mapping *sm,
		struct vm_area_struct *new_vma)
{
	const struct vdso_image *image = current->mm->context.vdso_image;

	vdso_fix_landing(image, new_vma);
	current->mm->context.vdso = (void __user *)new_vma->vm_start;

	return 0;
}

#ifdef CONFIG_TIME_NS
static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
	if (likely(vma->vm_mm == current->mm))
		return current->nsproxy->time_ns->vvar_page;

	/*
	 * VM_PFNMAP | VM_IO protect .fault() handler from being called
	 * through interfaces like /proc/$pid/mem or
	 * process_vm_{readv,writev}() as long as there's no .access()
	 * in special_mapping_vmops().
	 * For more details see check_vma_flags() and __access_remote_vm().
	 */
	WARN(1, "vvar_page accessed remotely");

	return NULL;
}

/*
 * The vvar page layout depends on whether a task belongs to the root or
 * non-root time namespace.  Whenever a task changes its namespace, the VVAR
 * page tables are cleared and then they will be re-faulted with a
 * corresponding layout.
 * See also the comment near timens_setup_vdso_data() for details.
 */
int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
{
	struct mm_struct *mm = task->mm;
	struct vm_area_struct *vma;

	mmap_read_lock(mm);

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		unsigned long size = vma->vm_end - vma->vm_start;

		if (vma_is_special_mapping(vma, &vvar_mapping))
			zap_page_range(vma, vma->vm_start, size);
	}

	mmap_read_unlock(mm);
	return 0;
}
#else
static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
	return NULL;
}
#endif
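
/*
 * Fault handler for the [vvar] mapping: translate the faulting offset into
 * the vvar, pvclock, hvclock or timens page and insert the matching PFN.
 */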
static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
			     struct vm_area_struct *vma, struct vm_fault *vmf)
{
	const struct vdso_image *image = vma->vm_mm->context.vdso_image;
	unsigned long pfn;
	long sym_offset;

	if (!image)
		return VM_FAULT_SIGBUS;

	sym_offset = (long)(vmf->pgoff << PAGE_SHIFT) +
		image->sym_vvar_start;

	/*
	 * Sanity check: a symbol offset of zero means that the page
	 * does not exist for this vdso image, not that the page is at
	 * offset zero relative to the text mapping.  This should be
	 * impossible here, because sym_offset should only be zero for
	 * the page past the end of the vvar mapping.
	 */
	if (sym_offset == 0)
		return VM_FAULT_SIGBUS;

	if (sym_offset == image->sym_vvar_page) {
		struct page *timens_page = find_timens_vvar_page(vma);

		pfn = __pa_symbol(&__vvar_page) >> PAGE_SHIFT;

		/*
		 * If a task belongs to a time namespace then a namespace
		 * specific VVAR is mapped with the sym_vvar_page offset and
		 * the real VVAR page is mapped with the sym_timens_page
		 * offset.
		 * See also the comment near timens_setup_vdso_data().
		 */
		if (timens_page) {
			unsigned long addr;
			vm_fault_t err;

			/*
			 * Optimization: inside a time namespace, pre-fault
			 * the VVAR page as well.  The timens page only holds
			 * clock offsets relative to VVAR, so the VVAR page
			 * will be faulted shortly by vDSO code anyway.
			 */
			addr = vmf->address + (image->sym_timens_page - sym_offset);
			err = vmf_insert_pfn(vma, addr, pfn);
			if (unlikely(err & VM_FAULT_ERROR))
				return err;

			pfn = page_to_pfn(timens_page);
		}

		return vmf_insert_pfn(vma, vmf->address, pfn);
	} else if (sym_offset == image->sym_pvclock_page) {
		struct pvclock_vsyscall_time_info *pvti =
			pvclock_get_pvti_cpu0_va();

		if (pvti && vclock_was_used(VDSO_CLOCKMODE_PVCLOCK)) {
			return vmf_insert_pfn_prot(vma, vmf->address,
					__pa(pvti) >> PAGE_SHIFT,
					pgprot_decrypted(vma->vm_page_prot));
		}
	} else if (sym_offset == image->sym_hvclock_page) {
		struct ms_hyperv_tsc_page *tsc_pg = hv_get_tsc_page();

		if (tsc_pg && vclock_was_used(VDSO_CLOCKMODE_HVCLOCK))
			return vmf_insert_pfn(vma, vmf->address,
					virt_to_phys(tsc_pg) >> PAGE_SHIFT);
	} else if (sym_offset == image->sym_timens_page) {
		struct page *timens_page = find_timens_vvar_page(vma);

		if (!timens_page)
			return VM_FAULT_SIGBUS;

		pfn = __pa_symbol(&__vvar_page) >> PAGE_SHIFT;
		return vmf_insert_pfn(vma, vmf->address, pfn);
	}

	return VM_FAULT_SIGBUS;
}

static const struct vm_special_mapping vdso_mapping = {
	.name = "[vdso]",
	.fault = vdso_fault,
	.mremap = vdso_mremap,
};
static const struct vm_special_mapping vvar_mapping = {
	.name = "[vvar]",
	.fault = vvar_fault,
};
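
/*
 * Note: image->sym_vvar_start is a negative offset, so the vvar pages sit
 * directly below the vDSO text.  map_vdso() therefore reserves a single
 * region of image->size - image->sym_vvar_start bytes and installs the
 * [vvar] mapping at its base and the [vdso] mapping at text_start above it.
 */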

/*
 * Add vdso and vvar mappings to current process.
 * @image - blob to map
 * @addr  - request a specific address (zero to map at free addr)
 */
static int map_vdso(const struct vdso_image *image, unsigned long addr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	unsigned long text_start;
	int ret = 0;

	if (mmap_write_lock_killable(mm))
		return -EINTR;

	addr = get_unmapped_area(NULL, addr,
				 image->size - image->sym_vvar_start, 0, 0);
	if (IS_ERR_VALUE(addr)) {
		ret = addr;
		goto up_fail;
	}

	text_start = addr - image->sym_vvar_start;

	/*
	 * MAYWRITE to allow gdb to COW and set breakpoints
	 */
	vma = _install_special_mapping(mm,
				       text_start,
				       image->size,
				       VM_READ|VM_EXEC|
				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
				       &vdso_mapping);

	if (IS_ERR(vma)) {
		ret = PTR_ERR(vma);
		goto up_fail;
	}

	vma = _install_special_mapping(mm,
				       addr,
				       -image->sym_vvar_start,
				       VM_READ|VM_MAYREAD|VM_IO|VM_DONTDUMP|
				       VM_PFNMAP,
				       &vvar_mapping);

	if (IS_ERR(vma)) {
		ret = PTR_ERR(vma);
		do_munmap(mm, text_start, image->size, NULL);
	} else {
		current->mm->context.vdso = (void __user *)text_start;
		current->mm->context.vdso_image = image;
	}

up_fail:
	mmap_write_unlock(mm);
	return ret;
}

#ifdef CONFIG_X86_64
/*
 * Put the vdso above the (randomized) stack with another randomized
 * offset.  This way there is no hole in the middle of the address space.
 * To save memory make sure it is still in the same PTE as the stack
 * top.  This doesn't give that many random bits.
 *
 * Note that this algorithm is imperfect: the distribution of the vdso
 * start address within a PMD is biased toward the end.
 *
 * Only used for the 64-bit and x32 vdsos.
 */
static unsigned long vdso_addr(unsigned long start, unsigned len)
{
	unsigned long addr, end;
	unsigned offset;

	/*
	 * Round up the start address.  It can start out unaligned as a result
	 * of stack start randomization.
	 */
	start = PAGE_ALIGN(start);

	/* Round the lowest possible end address up to a PMD boundary. */
	end = (start + len + PMD_SIZE - 1) & PMD_MASK;
	if (end >= TASK_SIZE_MAX)
		end = TASK_SIZE_MAX;
	end -= len;

	if (end > start) {
		offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
		addr = start + (offset << PAGE_SHIFT);
	} else {
		addr = start;
	}

	/*
	 * Forcibly align the final address in case we have a hardware
	 * issue that requires alignment for performance reasons.
	 */
	addr = align_vdso_addr(addr);

	return addr;
}

static int map_vdso_randomized(const struct vdso_image *image)
{
	unsigned long addr = vdso_addr(current->mm->start_stack,
				       image->size - image->sym_vvar_start);

	return map_vdso(image, addr);
}
#endif
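
/*
 * Map the given vDSO image at a caller-chosen address.  Fails with -EEXIST
 * if a [vdso]/[vvar] mapping is already present in this mm.
 */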
int map_vdso_once(const struct vdso_image *image, unsigned long addr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;

	mmap_write_lock(mm);
	/*
	 * Check if we have already mapped the vdso blob - fail to prevent
	 * userspace from abusing install_special_mapping(), which may not
	 * do accounting and rlimits right.
	 * We could search the VMA near context.vdso, but it's a slowpath,
	 * so let's explicitly check all VMAs to be completely sure.
	 */
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (vma_is_special_mapping(vma, &vdso_mapping) ||
				vma_is_special_mapping(vma, &vvar_mapping)) {
			mmap_write_unlock(mm);
			return -EEXIST;
		}
	}
	mmap_write_unlock(mm);

	return map_vdso(image, addr);
}

#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
static int load_vdso32(void)
{
	if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
		return 0;

	return map_vdso(&vdso_image_32, 0);
}
#endif
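
/*
 * arch_setup_additional_pages() is called by the ELF loader at execve() time
 * to map the vDSO for the new process image.
 */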

#ifdef CONFIG_X86_64
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
	if (!vdso64_enabled)
		return 0;

	return map_vdso_randomized(&vdso_image_64);
}

#ifdef CONFIG_COMPAT
int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
				       int uses_interp, bool x32)
{
#ifdef CONFIG_X86_X32_ABI
	if (x32) {
		if (!vdso64_enabled)
			return 0;
		return map_vdso_randomized(&vdso_image_x32);
	}
#endif
#ifdef CONFIG_IA32_EMULATION
	return load_vdso32();
#else
	return 0;
#endif
}
#endif
#else
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
	return load_vdso32();
}
#endif
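
/*
 * Return true if the saved user IP points at one of the 32-bit vDSO
 * sigreturn landing pads, i.e. the current syscall was issued by the vDSO
 * sigreturn trampoline.
 */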
bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
{
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
	const struct vdso_image *image = current->mm->context.vdso_image;
	unsigned long vdso = (unsigned long) current->mm->context.vdso;

	if (in_ia32_syscall() && image == &vdso_image_32) {
		if (regs->ip == vdso + image->sym_vdso32_sigreturn_landing_pad ||
		    regs->ip == vdso + image->sym_vdso32_rt_sigreturn_landing_pad)
			return true;
	}
#endif
	return false;
}

#ifdef CONFIG_X86_64
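/* Handle the "vdso=" boot parameter: vdso=0 disables the 64-bit/x32 vDSO mapping, vdso=1 (default) enables it. */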
static __init int vdso_setup(char *s)
{
	vdso64_enabled = simple_strtoul(s, NULL, 0);
	return 0;
}
__setup("vdso=", vdso_setup);

static int __init init_vdso(void)
{
	BUILD_BUG_ON(VDSO_CLOCKMODE_MAX >= 32);

	init_vdso_image(&vdso_image_64);

#ifdef CONFIG_X86_X32_ABI
	init_vdso_image(&vdso_image_x32);
#endif

	return 0;
}
subsys_initcall(init_vdso);
#endif /* CONFIG_X86_64 */