License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation, in some
cases, by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours that week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
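For illustration, the identifier goes at the top of each file in that file
type's comment style, which is why the script has to tell headers and .c
files apart; a sketch of the resulting first lines (following the convention
later written down in Documentation/process/license-rules.rst):

	C source (.c) files:       // SPDX-License-Identifier: GPL-2.0
	C header (.h) files:       /* SPDX-License-Identifier: GPL-2.0 */
	Makefiles/Kconfig/scripts: # SPDX-License-Identifier: GPL-2.0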
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

menu "General architecture-dependent options"

crash: move crashkernel parsing and vmcore related code under CONFIG_CRASH_CORE
Patch series "kexec/fadump: remove dependency with CONFIG_KEXEC and
reuse crashkernel parameter for fadump", v4.
Traditionally, kdump is used to save vmcore in case of a crash. Some
architectures like powerpc can save vmcore using architecture specific
support instead of kexec/kdump mechanism. Such architecture specific
support also needs to reserve memory, to be used by dump capture kernel.
The crashkernel parameter can be reused for memory reservation by such
architecture-specific infrastructure.
This patchset removes the dependency on CONFIG_KEXEC for the crashkernel
parameter and the vmcoreinfo-related code, as they can be reused without kexec
support. Also, the crashkernel parameter is reused instead of
fadump_reserve_mem to reserve memory for fadump.
The first patch moves crashkernel parameter parsing and vmcoreinfo
related code under CONFIG_CRASH_CORE instead of CONFIG_KEXEC_CORE. The
second patch reuses the definitions of append_elf_note() & final_note()
functions under CONFIG_CRASH_CORE in IA64 arch code. The third patch
removes dependency on CONFIG_KEXEC for firmware-assisted dump (fadump)
in powerpc. The next patch reuses crashkernel parameter for reserving
memory for fadump, instead of the fadump_reserve_mem parameter. This
has the advantage of using all syntaxes crashkernel parameter supports,
for fadump as well. The last patch updates fadump kernel documentation
about use of crashkernel parameter.
This patch (of 5):
Traditionally, kdump is used to save vmcore in case of a crash. Some
architectures like powerpc can save vmcore using architecture specific
support instead of kexec/kdump mechanism. Such architecture specific
support also needs to reserve memory, to be used by dump capture kernel.
The crashkernel parameter can be reused for memory reservation by such
architecture-specific infrastructure.
But currently, the code related to vmcoreinfo and parsing of the crashkernel
parameter is built under CONFIG_KEXEC_CORE. This patch introduces
CONFIG_CRASH_CORE and moves the above-mentioned code under this config,
allowing code reuse without a dependency on CONFIG_KEXEC. There is no
functional change with this patch.
Link: http://lkml.kernel.org/r/149035338104.6881.4550894432615189948.stgit@hbathini.in.ibm.com
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
config CRASH_CORE
	bool

config KEXEC_CORE
	select CRASH_CORE
	bool

config KEXEC_ELF
	bool

powerpc: ima: get the kexec buffer passed by the previous kernel
Patch series "ima: carry the measurement list across kexec", v8.
The TPM PCRs are only reset on a hard reboot. In order to validate a
TPM's quote after a soft reboot (eg. kexec -e), the IMA measurement
list of the running kernel must be saved and then restored on the
subsequent boot, possibly of a different architecture.
The existing securityfs binary_runtime_measurements file conveniently
provides a serialized format of the IMA measurement list. This patch
set serializes the measurement list in this format and restores it.
Up to now, the binary_runtime_measurements was defined in architecture-native
format, the assumption being that userspace could and would
handle any architecture conversions. With the ability to carry the
measurement list across kexec, possibly from one architecture to a
different one, the per-boot architecture information is lost and, with it,
the ability to recalculate the template digest hash. To resolve this
problem without breaking the existing ABI, this patch set introduces
the boot command line option "ima_canonical_fmt", which is arbitrarily
defined as little endian.
The need for this boot command line option will be limited to the
existing version 1 format of the binary_runtime_measurements.
Subsequent formats will be defined as canonical format (eg. TPM 2.0
support for larger digests).
A simplified method of Thiago Bauermann's "kexec buffer handover" patch
series for carrying the IMA measurement list across kexec is included in
this patch set. The simplified method requires all file measurements be
taken prior to executing the kexec load, as subsequent measurements will
not be carried across the kexec and restored.
This patch (of 10):
The IMA kexec buffer allows the currently running kernel to pass the
measurement list via a kexec segment to the kernel that will be kexec'd.
The second kernel can check whether the previous kernel sent the buffer
and retrieve it.
This is the architecture-specific part which enables IMA to receive the
measurement list passed by the previous kernel. It will be used in the
next patch.
The change in machine_kexec_64.c is to factor out the logic of removing
an FDT memory reservation so that it can be used by remove_ima_buffer.
Link: http://lkml.kernel.org/r/1480554346-29071-2-git-send-email-zohar@linux.vnet.ibm.com
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Andreas Steffen <andreas.steffen@strongswan.org>
Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
Cc: Josh Sklar <sklar@linux.vnet.ibm.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
config HAVE_IMA_KEXEC
	bool

config ARCH_HAS_SUBPAGE_FAULTS
	bool
	help
	  Select if the architecture can check permissions at sub-page
	  granularity (e.g. arm64 MTE). The probe_user_*() functions
	  must be implemented.

config HOTPLUG_SMT
	bool

config GENERIC_ENTRY
	bool

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	select TASKS_RCU if PREEMPTION
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

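As an aside, a minimal sketch of the register_kprobe() API mentioned in the
help text above, in the shape of a tiny module; the probed symbol is an
arbitrary example and error handling is reduced to the bare minimum:

	#include <linux/kprobes.h>
	#include <linux/module.h>
	#include <linux/printk.h>

	static int handler_pre(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("kprobe hit at %pS\n", (void *)instruction_pointer(regs));
		return 0;	/* let the probed instruction run normally */
	}

	static struct kprobe kp = {
		.symbol_name = "kernel_clone",	/* arbitrary example target */
		.pre_handler = handler_pre,
	};

	static int __init kprobe_demo_init(void)
	{
		return register_kprobe(&kp);	/* 0 on success, negative errno on failure */
	}

	static void __exit kprobe_demo_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(kprobe_demo_init);
	module_exit(kprobe_demo_exit);
	MODULE_LICENSE("GPL");
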
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	depends on CC_HAS_ASM_GOTO
	select OBJTOOL if HAVE_JUMP_LABEL_HACK
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. The update
	  of the condition is slower, but those are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

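For reference, the branch patching described above is typically used through
the static-key wrappers; a minimal sketch with an arbitrarily named key:

	#include <linux/jump_label.h>
	#include <linux/printk.h>

	static DEFINE_STATIC_KEY_FALSE(demo_feature_key);	/* arbitrary key name */

	void demo_fast_path(void)
	{
		/* Compiles to a nop until the key is enabled, then to a jump. */
		if (static_branch_unlikely(&demo_feature_key))
			pr_info("rarely taken slow path\n");
	}

	void demo_enable_feature(void)
	{
		static_branch_enable(&demo_feature_key);	/* patches all branch sites */
	}
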
config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPTION

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints
Add uprobes support to the core kernel, with x86 support.
This commit adds the kernel facilities, the actual uprobes
user-space ABI and perf probe support comes in later commits.
General design:
Uprobes are maintained in an rb-tree indexed by inode and offset
(the offset here is from the start of the mapping). For a unique
(inode, offset) tuple, there can be at most one uprobe in the
rb-tree.
Since the (inode, offset) tuple identifies a unique uprobe, more
than one user may be interested in the same uprobe. This provides
the ability to connect multiple 'consumers' to the same uprobe.
Each consumer defines a handler and a filter (optional). The
'handler' is run every time the uprobe is hit, if it matches the
'filter' criteria.
The first consumer of a uprobe causes the breakpoint to be
inserted at the specified address and subsequent consumers are
appended to this list. On subsequent probes, the consumer gets
appended to the existing list of consumers. The breakpoint is
removed when the last consumer unregisters. For all other
unregistrations, the consumer is removed from the list of
consumers.
Given an inode, we get a list of the mms that have mapped the
inode. Do the actual registration if mm maps the page where a
probe needs to be inserted/removed.
We use a temporary list to walk through the vmas that map the
inode.
- The number of maps that map the inode is not known before we
walk the rmap and keeps changing.
- Extending vm_area_struct wasn't recommended; it's a
size-critical data structure.
- There can be more than one mapping of the inode in the same mm.
We add callbacks to the mmap methods to keep an eye on text vmas
that are of interest to uprobes. When a vma of interest is mapped,
we insert the breakpoint at the right address.
Uprobe works by replacing the instruction at the address defined
by (inode, offset) with the arch specific breakpoint
instruction. We save a copy of the original instruction at the
uprobed address.
This is needed for:
a. executing the instruction out-of-line (xol).
b. instruction analysis for any subsequent fixups.
c. restoring the instruction back when the uprobe is unregistered.
We insert or delete a breakpoint instruction, and this
breakpoint instruction is assumed to be the smallest instruction
available on the platform. For fixed size instruction platforms
this is trivially true, for variable size instruction platforms
the breakpoint instruction is typically the smallest (often a
single byte).
Writing the instruction is done by COWing the page and changing
the instruction during the copy, this even though most platforms
allow atomic writes of the breakpoint instruction. This also
mirrors the behaviour of a ptrace() memory write to a PRIVATE
file map.
The core worker is derived from KSM's replace_page() logic.
In essence, similar to KSM:
a. allocate a new page and copy over contents of the page that
has the uprobed vaddr
b. modify the copy and insert the breakpoint at the required
address
c. switch the original page with the copy containing the
breakpoint
d. flush page tables.
replace_page() is being replicated here because of some minor
changes in the type of pages and also because Hugh Dickins had
plans to improve replace_page() for KSM specific work.
Instruction analysis on x86 is based on instruction decoder and
determines if an instruction can be probed and determines the
necessary fixups after singlestep. Instruction analysis is done
at probe insertion time so that we avoid having to repeat the
same analysis every time a probe is hit.
A lot of code here is due to the improvement/suggestions/inputs
from Peter Zijlstra.
Changelog:
(v10):
- Add code to clear REX.B prefix as suggested by Denys Vlasenko
and Masami Hiramatsu.
(v9):
- Use insn_offset_modrm as suggested by Masami Hiramatsu.
(v7):
Handle comments from Peter Zijlstra:
- Dont take reference to inode. (expect inode to uprobe_register to be sane).
- Use PTR_ERR to set the return value.
- No need to take reference to inode.
- use PTR_ERR to return error value.
- register and uprobe_unregister share code.
(v5):
- Modified del_consumer as per comments from Peter.
- Drop reference to inode before dropping reference to uprobe.
- Use i_size_read(inode) instead of inode->i_size.
- Ensure uprobe->consumers is NULL, before __uprobe_unregister() is called.
- Includes errno.h as recommended by Stephen Rothwell to fix a build issue
on sparc defconfig
- Remove restrictions while unregistering.
- Earlier code leaked inode references under some conditions while
registering/unregistering.
- Continue the vma-rmap walk even if the intermediate vma doesn't
meet the requirements.
- Validate the vma found by find_vma before inserting/removing the
breakpoint
- Call del_consumer under mutex_lock.
- Use hash locks.
- Handle mremap.
- Introduce find_least_offset_node() instead of close match logic in
find_uprobe
- Uprobes no more depends on MM_OWNER; No reference to task_structs
while inserting/removing a probe.
- Uses read_mapping_page instead of grab_cache_page so that the pages
have valid content.
- pass NULL to get_user_pages for the task parameter.
- call SetPageUptodate on the new page allocated in write_opcode.
- fix leaking a reference to the new page under certain conditions.
- Include Instruction Decoder if Uprobes gets defined.
- Remove const attributes for instruction prefix arrays.
- Uses mm_context to know if the application is 32 bit.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Also-written-by: Jim Keniston <jkenisto@us.ibm.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Anton Arapov <anton@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Denys Vlasenko <vda.linux@googlemail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linux-mm <linux-mm@kvack.org>
Link: http://lkml.kernel.org/r/20120209092642.GE16600@linux.vnet.ibm.com
[ Made various small edits to the commit log ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
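A rough sketch of the consumer-side API this design introduces (signatures as
of the original submission; they have changed in later kernels, so treat this
purely as an illustration):

	#include <linux/fs.h>
	#include <linux/uprobes.h>

	static int demo_handler(struct uprobe_consumer *self, struct pt_regs *regs)
	{
		/* runs every time the breakpoint at (inode, offset) is hit */
		return 0;
	}

	static struct uprobe_consumer demo_consumer = {
		.handler = demo_handler,
		/* .filter is optional */
	};

	/* (inode, offset) identify the probed instruction in the mapped file. */
	static int demo_attach(struct inode *inode, loff_t offset)
	{
		return uprobe_register(inode, offset, &demo_consumer);
	}

	static void demo_detach(struct inode *inode, loff_t offset)
	{
		uprobe_unregister(inode, offset, &demo_consumer);
	}
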
config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for
	  more information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

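As a small illustration of the get_unaligned/put_unaligned helpers referred to
above (the surrounding driver context is hypothetical):

	#include <asm/unaligned.h>
	#include <linux/types.h>

	/* 'p' points into a receive buffer with no alignment guarantee. */
	static inline u32 demo_read_len(const void *p)
	{
		return get_unaligned_le32(p);	/* safe on any architecture */
	}

	static inline void demo_write_len(void *p, u32 len)
	{
		put_unaligned_le32(len, p);
	}
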
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

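For illustration, the effect shows up through the ordinary byte-swap helpers;
with ARCH_USE_BUILTIN_BSWAP the compiler is free to turn a call like the
sketch below into a single bswap/rev instruction, possibly fused with an
adjacent load:

	#include <linux/swab.h>
	#include <linux/types.h>

	u32 demo_swap(u32 x)
	{
		return swab32(x);	/* may compile down to one bswap/rev instruction */
	}
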
config KRETPROBES
	def_bool y
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
	bool
	help
	  Since kretprobes modifies return address on the stack, the
	  stacktrace may see the kretprobe trampoline address instead
	  of correct one. If the architecture stacktrace code and
	  unwinder can adjust such entries, select this configuration.

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

config HAVE_FUNCTION_DESCRIPTORS
	bool

config TRACE_IRQFLAGS_SUPPORT
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()			in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()		if there is hardware single-step support
#	arch_has_block_step()		if there is hardware block-step support
#	asm/syscall.h			supplying asm-generic/syscall.h interface
#	linux/regset.h			user_regset interfaces
#	CORE_DUMP_USE_REGSET		#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE		calls ptrace_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME		calls resume_user_mode_work()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

include/linux/string.h: add the option of fortified string.h functions
This adds support for compiling with a rough equivalent to the glibc
_FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
overflow checks for string.h functions when the compiler determines the
size of the source or destination buffer at compile-time. Unlike glibc,
it covers buffer reads in addition to writes.
GNU C __builtin_*_chk intrinsics are avoided because they would force a
much more complex implementation. They aren't designed to detect read
overflows and offer no real benefit when using an implementation based
on inline checks. Inline checks don't add up to much code size and
allow full use of the regular string intrinsics while avoiding the need
for a bunch of _chk functions and per-arch assembly to avoid wrapper
overhead.
This detects various overflows at compile-time in various drivers and
some non-x86 core kernel code. There will likely be issues caught in
regular use at runtime too.
Future improvements left out of initial implementation for simplicity,
as it's all quite optional and can be done incrementally:
* Some of the fortified string functions (strncpy, strcat), don't yet
place a limit on reads from the source based on __builtin_object_size of
the source buffer.
* Extending coverage to more string functions like strlcat.
* It should be possible to optionally use __builtin_object_size(x, 1) for
some functions (C strings) to detect intra-object overflows (like
glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
approach to avoid likely compatibility issues.
* The compile-time checks should be made available via a separate config
option which can be enabled by default (or always enabled) once enough
time has passed to get the issues it catches fixed.
Kees said:
"This is great to have. While it was out-of-tree code, it would have
blocked at least CVE-2016-3858 from being exploitable (improper size
argument to strlcpy()). I've sent a number of fixes for
out-of-bounds-reads that this detected upstream already"
[arnd@arndb.de: x86: fix fortified memcpy]
Link: http://lkml.kernel.org/r/20170627150047.660360-1-arnd@arndb.de
[keescook@chromium.org: avoid panic() in favor of BUG()]
Link: http://lkml.kernel.org/r/20170626235122.GA25261@beast
[keescook@chromium.org: move from -mm, add ARCH_HAS_FORTIFY_SOURCE, tweak Kconfig help]
Link: http://lkml.kernel.org/r/20170526095404.20439-1-danielmicay@gmail.com
Link: http://lkml.kernel.org/r/1497903987-21002-8-git-send-email-keescook@chromium.org
Signed-off-by: Daniel Micay <danielmicay@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
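As an illustration of the class of bug the fortified routines catch, a
destination whose size the compiler can see makes an oversized memcpy() fail
at build time (a sketch; the exact diagnostic depends on compiler and kernel
version):

	#include <linux/string.h>

	static char demo_buf[8];

	void demo_fill(const void *src)
	{
		/*
		 * With CONFIG_FORTIFY_SOURCE the compiler knows
		 * __builtin_object_size(demo_buf, 0) == 8, so this 16-byte
		 * copy is rejected at compile time (or trips the runtime
		 * check when the length is only known at runtime).
		 */
		memcpy(demo_buf, src, 16);
	}
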
config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	depends on !ARCH_TASK_STRUCT_ALLOCATOR
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

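A sketch of what an architecture provides for this; the structure and field
names here are hypothetical, each architecture points the window at its own
FPU register area:

	#include <linux/stddef.h>

	/* Hypothetical arch thread_struct layout, for illustration only. */
	struct demo_thread_struct {
		unsigned long	sp;
		unsigned char	fpu_regs[512];	/* the only usercopy-visible region */
	};

	/* What an arch's arch_thread_struct_whitelist() boils down to (sketch). */
	static inline void demo_thread_struct_whitelist(unsigned long *offset,
							unsigned long *size)
	{
		*offset = offsetof(struct demo_thread_struct, fpu_regs);
		*size   = sizeof_field(struct demo_thread_struct, fpu_regs);
	}
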
Clarify naming of thread info/stack allocators
We've had the thread info allocated together with the thread stack for
most architectures for a long time (since the thread_info was split off
from the task struct), but that is about to change.
But the patches that move the thread info to be off-stack (and a part of
the task struct instead) made it clear how confused the allocator and
freeing functions are.
Because the common case was that we share an allocation with the thread
stack and the thread_info, the two pointers were identical. That
identity then meant that we would have things like
ti = alloc_thread_info_node(tsk, node);
...
tsk->stack = ti;
which certainly _worked_ (since stack and thread_info have the same
value), but is rather confusing: why are we assigning a thread_info to
the stack? And if we move the thread_info away, the "confusing" code
just gets to be entirely bogus.
So remove all this confusion, and make it clear that we are doing the
stack allocation by renaming and clarifying the function names to be
about the stack. The fact that the thread_info then shares the
allocation is an implementation detail, and not really about the
allocation itself.
This is a pure renaming and type fix: we pass in the same pointer, it's
just that we clarify what the pointer means.
The ia64 code that actually only has one single allocation (for all of
task_struct, thread_info and kernel thread stack) now looks a bit odd,
but since "tsk->stack" is actually not even used there, that oddity
doesn't matter. It would be a separate thing to clean that up, I
intentionally left the ia64 changes as a pure brute-force renaming and
type change.
Acked-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
# Select if arch has its private alloc_thread_stack() function
config ARCH_THREAD_STACK_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_WANTS_NO_INSTR
	bool
	help
	  An architecture should select this if the noinstr macro is being used on
	  functions to denote that the toolchain should avoid instrumenting such
	  functions and is required for correctness.

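For illustration, noinstr is simply an annotation applied to such functions; a
hypothetical entry-path helper:

	#include <linux/compiler_types.h>

	/*
	 * Hypothetical early-entry helper: runs before any instrumentation
	 * (tracing, KASAN/KCSAN, kprobes) can be tolerated.
	 */
	noinstr void demo_enter_from_user_mode(void)
	{
		/* only instrumentation-free code is allowed in here */
	}
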
32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option
All new 32-bit architectures should have a 64-bit userspace off_t type, but
existing architectures have 32-bit ones.
To enforce the rule, a new config option is added to arch/Kconfig; it defaults
ARCH_32BIT_OFF_T to disabled for new 32-bit architectures, and all existing
32-bit architectures enable it explicitly.
The new option affects force_o_largefile() behaviour. Namely, if the userspace
off_t is 64 bits long, we have no reason to prevent the user from opening big files.
Note that even if an architecture has only 64-bit off_t in the kernel
(arc, c6x, h8300, hexagon, nios2, openrisc, and unicore32),
a libc may use a 32-bit off_t, and therefore want to limit the file size
to 4GB unless specified differently in the open flags.
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2018-05-16 16:18:49 +08:00
|
|
|
|
config ARCH_32BIT_OFF_T
|
|
|
|
|
bool
|
|
|
|
|
depends on !64BIT
|
|
|
|
|
help
|
|
|
|
|
All new 32-bit architectures should have 64-bit off_t type on
|
|
|
|
|
userspace side which corresponds to the loff_t kernel type. This
|
|
|
|
|
is the requirement for modern ABIs. Some existing architectures
|
|
|
|
|
still support 32-bit off_t. This option is enabled for all such
|
|
|
|
|
architectures explicitly.
|
|
|
|
|
|
2021-02-11 04:51:02 +08:00
|
|
|
|
# Selected by 64 bit architectures which have a 32 bit f_tinode in struct ustat
|
|
|
|
|
config ARCH_32BIT_USTAT_F_TINODE
|
|
|
|
|
bool
|
|
|
|
|
|
2019-08-19 13:54:20 +08:00
|
|
|
|
config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for symbols
	  exported from assembly code.
config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h
	  For example the kprobes-based event tracer needs this API.
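As a hedged illustration of what this API enables (a sketch only, not code from any commit quoted here; the probed symbol do_sys_openat2 and the "di" register name are x86-64 assumptions), a kprobes handler can look up and read registers and kernel stack slots generically:

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

static struct kprobe kp = {
	.symbol_name = "do_sys_openat2",	/* assumed probe point */
};

static int demo_pre(struct kprobe *p, struct pt_regs *regs)
{
	int off = regs_query_register_offset("di");	/* 1st arg register on x86-64 */

	if (off >= 0)
		pr_info("dfd=%ld stack[0]=%lx\n",
			(long)regs_get_register(regs, off),
			regs_get_kernel_stack_nth(regs, 0));
	return 0;
}

static int __init regs_demo_init(void)
{
	kp.pre_handler = demo_pre;
	return register_kprobe(&kp);
}

static void __exit regs_demo_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(regs_demo_init);
module_exit(regs_demo_exit);
MODULE_LICENSE("GPL");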
rseq: Introduce restartable sequences system call
Expose a new system call allowing each thread to register one userspace
memory area to be used as an ABI between kernel and user-space for two
purposes: user-space restartable sequences and quick access to read the
current CPU number value from user-space.
* Restartable sequences (per-cpu atomics)
Restartable sequences allow user-space to perform update operations on
per-cpu data without requiring heavy-weight atomic operations.
The restartable critical sections (percpu atomics) work has been started
by Paul Turner and Andrew Hunter. It lets the kernel handle restart of
critical sections. [1] [2] The re-implementation proposed here brings a
few simplifications to the ABI which facilitates porting to other
architectures and speeds up the user-space fast path.
Here are benchmarks of various rseq use-cases.
Test hardware:
arm32: ARMv7 Processor rev 4 (v7l) "Cubietruck", 2-core
x86-64: Intel E5-2630 v3@2.40GHz, 16-core, hyperthreading
The following benchmarks were all performed on a single thread.
* Per-CPU statistic counter increment
getcpu+atomic (ns/op) rseq (ns/op) speedup
arm32: 344.0 31.4 11.0
x86-64: 15.3 2.0 7.7
* LTTng-UST: write event 32-bit header, 32-bit payload into tracer
per-cpu buffer
getcpu+atomic (ns/op) rseq (ns/op) speedup
arm32: 2502.0 2250.0 1.1
x86-64: 117.4 98.0 1.2
* liburcu percpu: lock-unlock pair, dereference, read/compare word
getcpu+atomic (ns/op) rseq (ns/op) speedup
arm32: 751.0 128.5 5.8
x86-64: 53.4 28.6 1.9
* jemalloc memory allocator adapted to use rseq
Using rseq with per-cpu memory pools in jemalloc at Facebook (based on
rseq 2016 implementation):
The production workload sees a 1-2% gain in average response-time latency, and
the P99 overall latency drops by 2-3%.
* Reading the current CPU number
Speeding up reading the current CPU number on which the caller thread is
running is done by keeping the current CPU number up to date within the
cpu_id field of the memory area registered by the thread. This is done
by making scheduler preemption set the TIF_NOTIFY_RESUME flag on the
current thread. Upon return to user-space, a notify-resume handler
updates the current CPU value within the registered user-space memory
area. User-space can then read the current CPU number directly from
memory.
Keeping the current cpu id in a memory area shared between kernel and
user-space is an improvement over the current mechanisms available to read
the current CPU number, and has the following benefits over
alternative approaches:
- 35x speedup on ARM vs system call through glibc
- 20x speedup on x86 compared to calling glibc, which calls vdso
executing a "lsl" instruction,
- 14x speedup on x86 compared to inlined "lsl" instruction,
- Unlike vdso approaches, this cpu_id value can be read from an inline
assembly, which makes it a useful building block for restartable
sequences.
- The approach of reading the cpu id through memory mapping shared
between kernel and user-space is portable (e.g. ARM), which is not the
case for the lsl-based x86 vdso.
On x86, yet another possible approach would be to use the gs segment
selector to point to user-space per-cpu data. This approach performs
similarly to the cpu id cache, but it has two disadvantages: it is
not portable, and it is incompatible with existing applications already
using the gs segment selector for other purposes.
Benchmarking various approaches for reading the current CPU number:
ARMv7 Processor rev 4 (v7l)
Machine model: Cubietruck
- Baseline (empty loop): 8.4 ns
- Read CPU from rseq cpu_id: 16.7 ns
- Read CPU from rseq cpu_id (lazy register): 19.8 ns
- glibc 2.19-0ubuntu6.6 getcpu: 301.8 ns
- getcpu system call: 234.9 ns
x86-64 Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz:
- Baseline (empty loop): 0.8 ns
- Read CPU from rseq cpu_id: 0.8 ns
- Read CPU from rseq cpu_id (lazy register): 0.8 ns
- Read using gs segment selector: 0.8 ns
- "lsl" inline assembly: 13.0 ns
- glibc 2.19-0ubuntu6 getcpu: 16.6 ns
- getcpu system call: 53.9 ns
- Speed (benchmark taken on v8 of patchset)
Running 10 runs of hackbench -l 100000 seems to indicate, contrary to
expectations, that enabling CONFIG_RSEQ slightly accelerates the
scheduler:
Configuration: 2 sockets * 8-core Intel(R) Xeon(R) CPU E5-2630 v3 @
2.40GHz (directly on hardware, hyperthreading disabled in BIOS, energy
saving disabled in BIOS, turboboost disabled in BIOS, cpuidle.off=1
kernel parameter), with a Linux v4.6 defconfig+localyesconfig,
restartable sequences series applied.
* CONFIG_RSEQ=n
avg.: 41.37 s
std.dev.: 0.36 s
* CONFIG_RSEQ=y
avg.: 40.46 s
std.dev.: 0.33 s
- Size
On x86-64, between CONFIG_RSEQ=n/y, the text size increase of vmlinux is
567 bytes, and the data size increase of vmlinux is 5696 bytes.
[1] https://lwn.net/Articles/650333/
[2] http://www.linuxplumbersconf.org/2013/ocw/system/presentations/1695/original/LPC%20-%20PerCpu%20Atomics.pdf
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Watson <davejwatson@fb.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Chris Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Andrew Hunter <ahh@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Maurer <bmaurer@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20151027235635.16059.11630.stgit@pjt-glaptop.roam.corp.google.com
Link: http://lkml.kernel.org/r/20150624222609.6116.86035.stgit@kitami.mtv.corp.google.com
Link: https://lkml.kernel.org/r/20180602124408.8430-3-mathieu.desnoyers@efficios.com
2018-06-02 20:43:54 +08:00
config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.
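The registration and cpu_id fast path described in the commit message above can be sketched from userspace roughly as follows. This is an assumption-laden sketch, not part of the series: it assumes a kernel with CONFIG_RSEQ, that the toolchain headers provide __NR_rseq, and a libc that has not already registered rseq for the thread (recent glibc does, in which case this registration fails with EBUSY and the libc-owned area should be used instead).

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/rseq.h>

#define RSEQ_SIG	0x53053053	/* arbitrary signature, must match at unregister */

static __thread struct rseq rs;		/* must stay valid for the thread's lifetime */

int main(void)
{
	if (syscall(__NR_rseq, &rs, sizeof(rs), 0, RSEQ_SIG)) {
		perror("rseq register");
		return 1;
	}
	/* The kernel keeps rs.cpu_id up to date across preemption and
	 * migration, so reading the current CPU is a plain memory load. */
	printf("running on CPU %u\n", rs.cpu_id);
	return 0;
}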
config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h
config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS
config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoints addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints under the
	  latter fashion.
config HAVE_USER_RETURN_NOTIFIER
	bool
config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. Also has support for calculating CPU cycle events
	  to determine how many clock cycles in a given period.
config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.
config HAVE_NMI_WATCHDOG
	depends on HAVE_NMI
	bool
	help
	  The arch provides a low level NMI watchdog. It provides
	  asm/nmi.h, and defines its own arch_touch_nmi_watchdog().
config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	select HAVE_NMI_WATCHDOG
	help
	  The arch chooses to provide its own hardlockup detector, which is
	  a superset of the HAVE_NMI_WATCHDOG. It also conforms to config
	  interfaces and parameters provided by hardlockup detector subsystem.
config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.
config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.
config HAVE_ARCH_JUMP_LABEL
	bool
config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool
config MMU_GATHER_TABLE_FREE
	bool
config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE
config MMU_GATHER_PAGE_SIZE
	bool
config MMU_GATHER_NO_RANGE
	bool
config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE
mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
Reading and modifying current->mm and current->active_mm and switching
mm should be done with irqs off, to prevent races seeing an intermediate
state.
This is similar to commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB
invalidate"). At exec-time when the new mm is activated, the old one
should usually be single-threaded and no longer used, unless something
else is holding an mm_users reference (which may be possible).
Absent other mm_users, there is also a race with preemption and lazy tlb
switching. Consider the kernel_execve case where the current thread is
using a lazy tlb active mm:
call_usermodehelper()
  kernel_execve()
    old_mm = current->mm;
    active_mm = current->active_mm;
    *** preempt *** --------------------> schedule()
                                            prev->active_mm = NULL;
                                            mmdrop(prev active_mm);
                                          ...
                    <-------------------- schedule()
    current->mm = mm;
    current->active_mm = mm;
    if (!old_mm)
        mmdrop(active_mm);
If we switch back to the kernel thread from a different mm, there is a
double free of the old active_mm, and a missing free of the new one.
Closing this race only requires interrupts to be disabled while ->mm
and ->active_mm are being switched, but the TLB problem requires also
holding interrupts off over activate_mm. Unfortunately not all archs
can do that yet, e.g., arm defers the switch if irqs are disabled and
expects finish_arch_post_lock_switch() to be called to complete the
flush; um takes a blocking lock in activate_mm().
So as a first step, disable interrupts across the mm/active_mm updates
to close the lazy tlb preempt race, and provide an arch option to
extend that to activate_mm which allows architectures doing IPI based
TLB shootdowns to close the second race.
This is a bit ugly, but in the interest of fixing the bug and backporting
before all architectures are converted this is a compromise.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200914045219.3736466-2-npiggin@gmail.com
2020-09-14 12:52:16 +08:00
config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.
config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool
config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.
config HAVE_CMPXCHG_LOCAL
	bool
config HAVE_CMPXCHG_DOUBLE
	bool
config ARCH_WEAK_RELEASE_ACQUIRE
	bool
config ARCH_WANT_IPC_PARSE_VERSION
	bool
config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool
[PATCH v3] ipc: provide generic compat versions of IPC syscalls
When using the "compat" APIs, architectures will generally want to
be able to make direct syscalls to msgsnd(), shmctl(), etc., and
in the kernel we would want them to be handled directly by
compat_sys_xxx() functions, as is true for other compat syscalls.
However, for historical reasons, several of the existing compat IPC
syscalls do not do this. semctl() expects a pointer to the fourth
argument, instead of the fourth argument itself. msgsnd(), msgrcv()
and shmat() expect arguments in different order.
This change adds an ARCH_WANT_OLD_COMPAT_IPC config option that can be
set to preserve this behavior for ports that use it (x86, sparc, powerpc,
s390, and mips). No actual semantics are changed for those architectures,
and there is only a minimal amount of code refactoring in ipc/compat.c.
Newer architectures like tile (and perhaps future architectures such
as arm64 and unicore64) should not select this option, and thus can
avoid having any IPC-specific code at all in their architecture-specific
compat layer. In the same vein, if this option is not selected, IPC_64
mode is assumed, since that's what the <asm-generic> headers expect.
The workaround code in "tile" for msgsnd() and msgrcv() is removed
with this change; it also fixes the bug that shmat() and semctl() were
not being properly handled.
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2012-03-16 01:13:38 +08:00
config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool
seccomp: Move config option SECCOMP to arch/Kconfig
In order to make adding configurable features into seccomp easier,
it's better to have the options at one single location, considering
especially that the bulk of seccomp code is arch-independent. A quick
look also shows that many SECCOMP descriptions are outdated; they talk
about /proc rather than prctl.
As a result of moving the config option and keeping it default on,
architectures arm, arm64, csky, riscv, sh, and xtensa did not have SECCOMP
on by default prior to this, and SECCOMP will now be on by default for them.
Architectures microblaze, mips, powerpc, s390, sh, and sparc have an
outdated dependency on PROC_FS, and this dependency is removed in this change.
Suggested-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/lkml/CAG48ez1YWz9cnp08UZgeieYRhHdqh-ch7aNwc4JRBnGyrmgfMg@mail.gmail.com/
Signed-off-by: YiFei Zhu <yifeifz2@illinois.edu>
[kees: added HAVE_ARCH_SECCOMP help text, tweaked wording]
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/9ede6ef35c847e58d61e476c6a39540520066613.1600951211.git.yifeifz2@illinois.edu
2020-09-24 20:44:16 +08:00
config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide an override for __NR_seccomp_sigreturn,
	  and compat syscalls if the asm-generic/seccomp.h defaults need adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32
seccomp: add system call filtering using BPF
[This patch depends on luto@mit.edu's no_new_privs patch:
https://lkml.org/lkml/2012/1/30/264
The whole series including Andrew's patches can be found here:
https://github.com/redpig/linux/tree/seccomp
Complete diff here:
https://github.com/redpig/linux/compare/1dc65fed...seccomp
]
This patch adds support for seccomp mode 2. Mode 2 introduces the
ability for unprivileged processes to install system call filtering
policy expressed in terms of a Berkeley Packet Filter (BPF) program.
This program will be evaluated in the kernel for each system call
the task makes and computes a result based on data in the format
of struct seccomp_data.
A filter program may be installed by calling:
struct sock_fprog fprog = { ... };
...
prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &fprog);
The return value of the filter program determines if the system call is
allowed to proceed or is denied. If the first filter program installed
allows prctl(2) calls, then the above call may be made repeatedly
by a task to further reduce its access to the kernel. All attached
programs must be evaluated before a system call will be allowed to
proceed.
Filter programs will be inherited across fork/clone and execve.
However, if the task attaching the filter is unprivileged
(!CAP_SYS_ADMIN) the no_new_privs bit will be set on the task. This
ensures that unprivileged tasks cannot attach filters that affect
privileged tasks (e.g., setuid binary).
There are a number of benefits to this approach, a few of which are
as follows:
- BPF has been exposed to userland for a long time
- BPF optimization (and JIT'ing) are well understood
- Userland already knows its ABI: system call numbers and desired
arguments
- No time-of-check-time-of-use vulnerable data accesses are possible.
- system call arguments are loaded on access only to minimize copying
required for system call policy decisions.
Mode 2 support is restricted to architectures that enable
HAVE_ARCH_SECCOMP_FILTER. In this patch, the primary dependency is on
syscall_get_arguments(). The full desired scope of this feature will
add a few minor additional requirements expressed later in this series.
Based on discussion, SECCOMP_RET_ERRNO and SECCOMP_RET_TRACE seem to be
the desired additional functionality.
No architectures are enabled in this patch.
Signed-off-by: Will Drewry <wad@chromium.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Reviewed-by: Indan Zupancic <indan@nul.nu>
Acked-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
v18: - rebase to v3.4-rc2
- s/chk/check/ (akpm@linux-foundation.org,jmorris@namei.org)
- allocate with GFP_KERNEL|__GFP_NOWARN (indan@nul.nu)
- add a comment for get_u32 regarding endianness (akpm@)
- fix other typos, style mistakes (akpm@)
- added acked-by
v17: - properly guard seccomp filter needed headers (leann@ubuntu.com)
- tighten return mask to 0x7fff0000
v16: - no change
v15: - add a 4 instr penalty when counting a path to account for seccomp_filter
size (indan@nul.nu)
- drop the max insns to 256KB (indan@nul.nu)
- return ENOMEM if the max insns limit has been hit (indan@nul.nu)
- move IP checks after args (indan@nul.nu)
- drop !user_filter check (indan@nul.nu)
- only allow explicit bpf codes (indan@nul.nu)
- exit_code -> exit_sig
v14: - put/get_seccomp_filter takes struct task_struct
(indan@nul.nu,keescook@chromium.org)
- adds seccomp_chk_filter and drops general bpf_run/chk_filter user
- add seccomp_bpf_load for use by net/core/filter.c
- lower max per-process/per-hierarchy: 1MB
- moved nnp/capability check prior to allocation
(all of the above: indan@nul.nu)
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - added a maximum instruction count per path (indan@nul.nu,oleg@redhat.com)
- removed copy_seccomp (keescook@chromium.org,indan@nul.nu)
- reworded the prctl_set_seccomp comment (indan@nul.nu)
v11: - reorder struct seccomp_data to allow future args expansion (hpa@zytor.com)
- style clean up, @compat dropped, compat_sock_fprog32 (indan@nul.nu)
- do_exit(SIGSYS) (keescook@chromium.org, luto@mit.edu)
- pare down Kconfig doc reference.
- extra comment clean up
v10: - seccomp_data has changed again to be more aesthetically pleasing
(hpa@zytor.com)
- calling convention is noted in a new u32 field using syscall_get_arch.
This allows for cross-calling convention tasks to use seccomp filters.
(hpa@zytor.com)
- lots of clean up (thanks, Indan!)
v9: - n/a
v8: - use bpf_chk_filter, bpf_run_filter. update load_fns
- Lots of fixes courtesy of indan@nul.nu:
-- fix up load behavior, compat fixups, and merge alloc code,
-- renamed pc and dropped __packed, use bool compat.
-- Added a hidden CONFIG_SECCOMP_FILTER to synthesize non-arch
dependencies
v7: (massive overhaul thanks to Indan, others)
- added CONFIG_HAVE_ARCH_SECCOMP_FILTER
- merged into seccomp.c
- minimal seccomp_filter.h
- no config option (part of seccomp)
- no new prctl
- doesn't break seccomp on systems without asm/syscall.h
(works but arg access always fails)
- dropped seccomp_init_task, extra free functions, ...
- dropped the no-asm/syscall.h code paths
- merges with network sk_run_filter and sk_chk_filter
v6: - fix memory leak on attach compat check failure
- require no_new_privs || CAP_SYS_ADMIN prior to filter
installation. (luto@mit.edu)
- s/seccomp_struct_/seccomp_/ for macros/functions (amwang@redhat.com)
- cleaned up Kconfig (amwang@redhat.com)
- on block, note if the call was compat (so the # means something)
v5: - uses syscall_get_arguments
(indan@nul.nu,oleg@redhat.com, mcgrathr@chromium.org)
- uses union-based arg storage with hi/lo struct to
handle endianness. Compromises between the two alternate
proposals to minimize extra arg shuffling and account for
endianness assuming userspace uses offsetof().
(mcgrathr@chromium.org, indan@nul.nu)
- update Kconfig description
- add include/seccomp_filter.h and add its installation
- (naive) on-demand syscall argument loading
- drop seccomp_t (eparis@redhat.com)
v4: - adjusted prctl to make room for PR_[SG]ET_NO_NEW_PRIVS
- now uses current->no_new_privs
(luto@mit.edu,torvalds@linux-foundation.com)
- assign names to seccomp modes (rdunlap@xenotime.net)
- fix style issues (rdunlap@xenotime.net)
- reworded Kconfig entry (rdunlap@xenotime.net)
v3: - macros to inline (oleg@redhat.com)
- init_task behavior fixed (oleg@redhat.com)
- drop creator entry and extra NULL check (oleg@redhat.com)
- alloc returns -EINVAL on bad sizing (serge.hallyn@canonical.com)
- adds tentative use of "always_unprivileged" as per
torvalds@linux-foundation.org and luto@mit.edu
v2: - (patch 2 only)
Signed-off-by: James Morris <james.l.morris@oracle.com>
2012-04-13 05:47:57 +08:00
config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.
config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution. By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.
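To make the mode 1 behaviour described above concrete, here is a minimal userspace sketch (illustration only, assuming a kernel built with CONFIG_SECCOMP): after the prctl() only read(), write(), _exit() and sigreturn() remain allowed, and any other syscall kills the task.

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)) {
		perror("PR_SET_SECCOMP");
		return 1;
	}
	write(1, "sandboxed\n", 10);	/* allowed: write() on an open fd */
	/* openat(), socket(), etc. would deliver SIGKILL here. */
	_exit(0);
}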
config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.
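And a hedged sketch of the filter mode (illustration only, assuming an x86-64 build; a production filter should also check seccomp_data->arch, as Documentation/userspace-api/seccomp_filter.rst cautions): install a BPF program that fails getpid() with EPERM and allows every other syscall.

#include <stddef.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
	struct sock_filter filter[] = {
		/* Load the syscall number from struct seccomp_data. */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, nr)),
		/* Fail getpid() with EPERM, allow everything else. */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(filter) / sizeof(filter[0]),
		.filter = filter,
	};

	/* Required for unprivileged callers, as the commit message explains. */
	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
		perror("PR_SET_SECCOMP");
		return 1;
	}

	pid_t pid = getpid();		/* now returns -1 with errno == EPERM */
	printf("getpid() -> %d, errno %d\n", (int)pid, errno);
	return 0;
}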
seccomp: add system call filtering using BPF
[This patch depends on luto@mit.edu's no_new_privs patch:
https://lkml.org/lkml/2012/1/30/264
The whole series including Andrew's patches can be found here:
https://github.com/redpig/linux/tree/seccomp
Complete diff here:
https://github.com/redpig/linux/compare/1dc65fed...seccomp
]
This patch adds support for seccomp mode 2. Mode 2 introduces the
ability for unprivileged processes to install system call filtering
policy expressed in terms of a Berkeley Packet Filter (BPF) program.
This program will be evaluated in the kernel for each system call
the task makes and computes a result based on data in the format
of struct seccomp_data.
A filter program may be installed by calling:
struct sock_fprog fprog = { ... };
...
prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &fprog);
The return value of the filter program determines if the system call is
allowed to proceed or denied. If the first filter program installed
allows prctl(2) calls, then the above call may be made repeatedly
by a task to further reduce its access to the kernel. All attached
programs must be evaluated before a system call will be allowed to
proceed.
Filter programs will be inherited across fork/clone and execve.
However, if the task attaching the filter is unprivileged
(!CAP_SYS_ADMIN) the no_new_privs bit will be set on the task. This
ensures that unprivileged tasks cannot attach filters that affect
privileged tasks (e.g., setuid binary).
There are a number of benefits to this approach. A few of which are
as follows:
- BPF has been exposed to userland for a long time
- BPF optimization (and JIT'ing) are well understood
- Userland already knows its ABI: system call numbers and desired
arguments
- No time-of-check-time-of-use vulnerable data accesses are possible.
- system call arguments are loaded on access only to minimize copying
required for system call policy decisions.
Mode 2 support is restricted to architectures that enable
HAVE_ARCH_SECCOMP_FILTER. In this patch, the primary dependency is on
syscall_get_arguments(). The full desired scope of this feature will
add a few minor additional requirements expressed later in this series.
Based on discussion, SECCOMP_RET_ERRNO and SECCOMP_RET_TRACE seem to be
the desired additional functionality.
No architectures are enabled in this patch.
Signed-off-by: Will Drewry <wad@chromium.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Reviewed-by: Indan Zupancic <indan@nul.nu>
Acked-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
v18: - rebase to v3.4-rc2
- s/chk/check/ (akpm@linux-foundation.org,jmorris@namei.org)
- allocate with GFP_KERNEL|__GFP_NOWARN (indan@nul.nu)
- add a comment for get_u32 regarding endianness (akpm@)
- fix other typos, style mistakes (akpm@)
- added acked-by
v17: - properly guard seccomp filter needed headers (leann@ubuntu.com)
- tighten return mask to 0x7fff0000
v16: - no change
v15: - add a 4 instr penalty when counting a path to account for seccomp_filter
size (indan@nul.nu)
- drop the max insns to 256KB (indan@nul.nu)
- return ENOMEM if the max insns limit has been hit (indan@nul.nu)
- move IP checks after args (indan@nul.nu)
- drop !user_filter check (indan@nul.nu)
- only allow explicit bpf codes (indan@nul.nu)
- exit_code -> exit_sig
v14: - put/get_seccomp_filter takes struct task_struct
(indan@nul.nu,keescook@chromium.org)
- adds seccomp_chk_filter and drops general bpf_run/chk_filter user
- add seccomp_bpf_load for use by net/core/filter.c
- lower max per-process/per-hierarchy: 1MB
- moved nnp/capability check prior to allocation
(all of the above: indan@nul.nu)
v13: - rebase on to 88ebdda6159ffc15699f204c33feb3e431bf9bdc
v12: - added a maximum instruction count per path (indan@nul.nu,oleg@redhat.com)
- removed copy_seccomp (keescook@chromium.org,indan@nul.nu)
- reworded the prctl_set_seccomp comment (indan@nul.nu)
v11: - reorder struct seccomp_data to allow future args expansion (hpa@zytor.com)
- style clean up, @compat dropped, compat_sock_fprog32 (indan@nul.nu)
- do_exit(SIGSYS) (keescook@chromium.org, luto@mit.edu)
- pare down Kconfig doc reference.
- extra comment clean up
v10: - seccomp_data has changed again to be more aesthetically pleasing
(hpa@zytor.com)
- calling convention is noted in a new u32 field using syscall_get_arch.
This allows for cross-calling convention tasks to use seccomp filters.
(hpa@zytor.com)
- lots of clean up (thanks, Indan!)
v9: - n/a
v8: - use bpf_chk_filter, bpf_run_filter. update load_fns
- Lots of fixes courtesy of indan@nul.nu:
-- fix up load behavior, compat fixups, and merge alloc code,
-- renamed pc and dropped __packed, use bool compat.
-- Added a hidden CONFIG_SECCOMP_FILTER to synthesize non-arch
dependencies
v7: (massive overhaul thanks to Indan, others)
- added CONFIG_HAVE_ARCH_SECCOMP_FILTER
- merged into seccomp.c
- minimal seccomp_filter.h
- no config option (part of seccomp)
- no new prctl
- doesn't break seccomp on systems without asm/syscall.h
(works but arg access always fails)
- dropped seccomp_init_task, extra free functions, ...
- dropped the no-asm/syscall.h code paths
- merges with network sk_run_filter and sk_chk_filter
v6: - fix memory leak on attach compat check failure
- require no_new_privs || CAP_SYS_ADMIN prior to filter
installation. (luto@mit.edu)
- s/seccomp_struct_/seccomp_/ for macros/functions (amwang@redhat.com)
- cleaned up Kconfig (amwang@redhat.com)
- on block, note if the call was compat (so the # means something)
v5: - uses syscall_get_arguments
(indan@nul.nu,oleg@redhat.com, mcgrathr@chromium.org)
- uses union-based arg storage with hi/lo struct to
handle endianness. Compromises between the two alternate
proposals to minimize extra arg shuffling and account for
endianness assuming userspace uses offsetof().
(mcgrathr@chromium.org, indan@nul.nu)
- update Kconfig description
- add include/seccomp_filter.h and add its installation
- (naive) on-demand syscall argument loading
- drop seccomp_t (eparis@redhat.com)
v4: - adjusted prctl to make room for PR_[SG]ET_NO_NEW_PRIVS
- now uses current->no_new_privs
(luto@mit.edu,torvalds@linux-foundation.com)
- assign names to seccomp modes (rdunlap@xenotime.net)
- fix style issues (rdunlap@xenotime.net)
- reworded Kconfig entry (rdunlap@xenotime.net)
v3: - macros to inline (oleg@redhat.com)
- init_task behavior fixed (oleg@redhat.com)
- drop creator entry and extra NULL check (oleg@redhat.com)
- alloc returns -EINVAL on bad sizing (serge.hallyn@canonical.com)
- adds tentative use of "always_unprivileged" as per
torvalds@linux-foundation.org and luto@mit.edu
v2: - (patch 2 only)
Signed-off-by: James Morris <james.l.morris@oracle.com>
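
To illustrate how such a mode 2 (SECCOMP_MODE_FILTER) filter is installed from
userspace, here is a minimal sketch (not part of this patch; it assumes an
x86-64 build and the UAPI definitions from <linux/seccomp.h>, <linux/filter.h>
and <linux/audit.h>, with error handling omitted):

#include <stddef.h>
#include <sys/prctl.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <linux/unistd.h>

int main(void)
{
        /* Kill the task on any foreign-arch syscall or on uname(2);
         * allow everything else.  x86-64 is assumed here. */
        struct sock_filter insns[] = {
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                         offsetof(struct seccomp_data, arch)),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                         offsetof(struct seccomp_data, nr)),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uname, 0, 1),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = sizeof(insns) / sizeof(insns[0]),
                .filter = insns,
        };

        /* no_new_privs is required unless the caller has CAP_SYS_ADMIN. */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
        return 0;
}
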
config SECCOMP_CACHE_DEBUG
        bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
        depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
        depends on PROC_FS
        help
          This enables the /proc/pid/seccomp_cache interface to monitor
          seccomp cache data. The file format is subject to change. Reading
          the file requires CAP_SYS_ADMIN.

          This option is for debugging only. Enabling presents the risk that
          an adversary may be able to infer the seccomp filter logic.

          If unsure, say N.

config HAVE_ARCH_STACKLEAK
        bool
        help
          An architecture should select this if it has the code which
          fills the used part of the kernel stack with the STACKLEAK_POISON
          value before returning from system calls.

config HAVE_STACKPROTECTOR
        bool
        help
          An arch should select this symbol if:
          - it has implemented a stack canary (e.g. __stack_chk_guard)

Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables
The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.
That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.
HOWEVER.
It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_NONE is not set
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_STACKPROTECTOR_AUTO=y
and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.
The solution here is to rename not just the old REGULAR stack
protector option, but also the strong one. This is done by
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).
This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

stack-protector: test compiler capability in Kconfig and drop AUTO mode
Move the test for the -fstack-protector(-strong) option to Kconfig.
If the compiler does not support the option, the corresponding menu
is automatically hidden. If STRONG is not supported, it will fall
back to REGULAR. If REGULAR is not supported, it will be disabled.
This means AUTO is implicitly handled by the dependency solver of
Kconfig, hence removed.
I also turned the 'choice' into only two boolean symbols. The use of
'choice' is not a good idea here, because all of all{yes,mod,no}config
would choose the first visible value, while we want allnoconfig to
disable as many features as possible.
X86 has additional shell scripts in case the compiler supports those
options, but generates broken code. I added CC_HAS_SANE_STACKPROTECTOR
to test this. I had to add -m32 to gcc-x86_32-has-stack-protector.sh
to make it work correctly.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Kees Cook <keescook@chromium.org>

stackprotector: Introduce CONFIG_CC_STACKPROTECTOR_STRONG
This changes the stack protector config option into a choice of
"None", "Regular", and "Strong":
CONFIG_CC_STACKPROTECTOR_NONE
CONFIG_CC_STACKPROTECTOR_REGULAR
CONFIG_CC_STACKPROTECTOR_STRONG
"Regular" means the old CONFIG_CC_STACKPROTECTOR=y option.
"Strong" is a new mode introduced by this patch. With "Strong" the
kernel is built with -fstack-protector-strong (available in
gcc 4.9 and later). This option increases the coverage of the stack
protector without the heavy performance hit of -fstack-protector-all.
For reference, the stack protector options available in gcc are:
-fstack-protector-all:
Adds the stack-canary saving prefix and stack-canary checking
suffix to _all_ function entry and exit. Results in substantial
use of stack space for saving the canary for deep stack users
(e.g. historically xfs), and measurable (though shockingly still
low) performance hit due to all the saving/checking. Really not
suitable for sane systems, and was entirely removed as an option
from the kernel many years ago.
-fstack-protector:
Adds the canary save/check to functions that define an 8
(--param=ssp-buffer-size=N, N=8 by default) or more byte local
char array. Traditionally, stack overflows happened with
string-based manipulations, so this was a way to find those
functions. Very few total functions actually get the canary; no
measurable performance or size overhead.
-fstack-protector-strong
Adds the canary for a wider set of functions, since it's not
just those with strings that have ultimately been vulnerable to
stack-busting. With this superset, more functions end up with a
canary, but it still remains small compared to all functions
with only a small change in performance. Based on the original
design document, a function gets the canary when it contains any
of:
- local variable's address used as part of the right hand side
of an assignment or function argument
- local variable is an array (or union containing an array),
regardless of array type or length
- uses register local variables
https://docs.google.com/a/google.com/document/d/1xXBH6rRZue4f296vGt9YQcuLVQHeE516stHwt8M9xyU
Find below a comparison of "size" and "objdump" output when built with
gcc-4.9 in three configurations:
- defconfig
11430641 kernel text size
36110 function bodies
- defconfig + CONFIG_CC_STACKPROTECTOR_REGULAR
11468490 kernel text size (+0.33%)
1015 of 36110 functions are stack-protected (2.81%)
- defconfig + CONFIG_CC_STACKPROTECTOR_STRONG via this patch
11692790 kernel text size (+2.24%)
7401 of 36110 functions are stack-protected (20.5%)
With -strong, ARM's compressed boot code now triggers stack
protection, so a static guard was added. Since this is only used
during decompression and was never used before, the exposure
here is very small. Once it switches to the full kernel, the
stack guard is back to normal.
Chrome OS has been using -fstack-protector-strong for its kernel
builds for the last 8 months with no problems.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux-arch@vger.kernel.org
Link: http://lkml.kernel.org/r/1387481759-14535-3-git-send-email-keescook@chromium.org
[ Improved the changelog and descriptions some more. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

config STACKPROTECTOR
        bool "Stack Protector buffer overflow detection"
        depends on HAVE_STACKPROTECTOR
        depends on $(cc-option,-fstack-protector)
        default y
        help
          This option turns on the "stack-protector" GCC feature. This
          feature puts, at the beginning of functions, a canary value on
          the stack just before the return address, and validates
          the value just before actually returning. Stack based buffer
          overflows (that need to overwrite this return address) now also
          overwrite the canary, which gets detected and the attack is then
          neutralized via a kernel panic.

          Functions will have the stack-protector canary logic added if they
          have an 8-byte or larger character array on the stack.

          This feature requires gcc version 4.2 or above, or a distribution
          gcc with the feature backported ("-fstack-protector").

          On an x86 "defconfig" build, this feature adds canary checks to
          about 3% of all kernel functions, which increases kernel code size
          by about 0.3%.

config STACKPROTECTOR_STRONG
        bool "Strong Stack Protector"
        depends on STACKPROTECTOR
        depends on $(cc-option,-fstack-protector-strong)
        default y
        help
          Functions will have the stack-protector canary logic added in any
          of the following conditions:

          - local variable's address used as part of the right hand side of an
            assignment or function argument
          - local variable is an array (or union containing an array),
            regardless of array type or length
          - uses register local variables

          This feature requires gcc version 4.9 or above, or a distribution
          gcc with the feature backported ("-fstack-protector-strong").

          On an x86 "defconfig" build, this feature adds canary checks to
          about 20% of all kernel functions, which increases the kernel code
          size by about 2%.
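
As a rough illustration (hypothetical functions, not taken from any of the
patches above), here is the kind of code each rule matches; only the first
function would also get a canary under plain -fstack-protector:

#include <stdio.h>
#include <string.h>

/* Canary under both -fstack-protector and -fstack-protector-strong:
 * an 8-byte-or-larger char array lives on the stack. */
void copies_into_char_array(const char *src)
{
        char buf[64];

        strncpy(buf, src, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        puts(buf);
}

/* Canary only under -strong: a local variable's address is passed as a
 * function argument. */
void local_address_escapes(void)
{
        int value = 0;

        if (scanf("%d", &value) == 1)
                printf("%d\n", value);
}

/* Canary only under -strong: a local array of a non-char type,
 * regardless of its element type or length. */
long sums_non_char_array(void)
{
        long totals[4] = { 1, 2, 3, 4 };

        return totals[0] + totals[3];
}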
arm64: Add gcc Shadow Call Stack support
Shadow call stacks will be available in GCC >= 12; this patch makes
the corresponding kernel configuration available when compiling
the kernel with gcc.
Note that the implementation in GCC is slightly different from Clang.
With SCS enabled, functions will only pop x30 once in the epilogue,
like:
str x30, [x18], #8
stp x29, x30, [sp, #-16]!
......
- ldp x29, x30, [sp], #16 //clang
+ ldr x29, [sp], #16 //GCC
ldr x30, [x18, #-8]!
Link: https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=ce09ab17ddd21f73ff2caf6eec3b0ee9b0e1a11e
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Dan Li <ashimida@linux.alibaba.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220303074323.86282-1-ashimida@linux.alibaba.com

config ARCH_SUPPORTS_SHADOW_CALL_STACK
        bool
        help
          An architecture should select this if it supports the compiler's
          Shadow Call Stack and implements runtime support for shadow stack
          switching.

config SHADOW_CALL_STACK
        bool "Shadow Call Stack"
        depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
        depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
        help
          This option enables the compiler's Shadow Call Stack, which
          uses a shadow stack to protect function return addresses from
          being overwritten by an attacker. More information can be found
          in the compiler's documentation:

          - Clang: https://clang.llvm.org/docs/ShadowCallStack.html
          - GCC: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html#Instrumentation-Options

          Note that security guarantees in the kernel differ from the
          ones documented for user space. The kernel must store addresses
          of shadow stacks in memory, which means an attacker capable of
          reading and writing arbitrary memory may be able to locate them
          and hijack control flow by modifying the stacks.

kbuild: add support for Clang LTO
This change adds build system support for Clang's Link Time
Optimization (LTO). With -flto, instead of ELF object files, Clang
produces LLVM bitcode, which is compiled into native code at link
time, allowing the final binary to be optimized globally. For more
details, see:
https://llvm.org/docs/LinkTimeOptimization.html
The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
which defaults to LTO being disabled. To use LTO, the architecture
must select ARCH_SUPPORTS_LTO_CLANG and support:
- compiling with Clang,
- compiling all assembly code with Clang's integrated assembler,
- and linking with LLD.
While using CONFIG_LTO_CLANG_FULL results in the best runtime
performance, the compilation is not scalable in time or
memory. CONFIG_LTO_CLANG_THIN enables ThinLTO, which allows
parallel optimization and faster incremental builds. ThinLTO is
used by default if the architecture also selects
ARCH_SUPPORTS_LTO_CLANG_THIN:
https://clang.llvm.org/docs/ThinLTO.html
To enable LTO, LLVM tools must be used to handle bitcode files, by
passing LLVM=1 and LLVM_IAS=1 options to make:
$ make LLVM=1 LLVM_IAS=1 defconfig
$ scripts/config -e LTO_CLANG_THIN
$ make LLVM=1 LLVM_IAS=1
To prepare for LTO support with other compilers, common parts are
gated behind the CONFIG_LTO option, and LTO can be disabled for
specific files by filtering out CC_FLAGS_LTO.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20201211184633.3213045-3-samitolvanen@google.com

config LTO
        bool
        help
          Selected if the kernel will be built using the compiler's LTO feature.

config LTO_CLANG
        bool
        select LTO
        help
          Selected if the kernel will be built using Clang's LTO feature.

config ARCH_SUPPORTS_LTO_CLANG
        bool
        help
          An architecture should select this option if it supports:
          - compiling with Clang,
          - compiling inline assembly with Clang's integrated assembler,
          - and linking with LLD.

config ARCH_SUPPORTS_LTO_CLANG_THIN
        bool
        help
          An architecture should select this option if it can support Clang's
          ThinLTO mode.

config HAS_LTO_CLANG
        def_bool y
        depends on CC_IS_CLANG && LD_IS_LLD && AS_IS_LLVM
        depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
        depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
        depends on ARCH_SUPPORTS_LTO_CLANG
        depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
        depends on !KASAN || KASAN_HW_TAGS
        depends on !GCOV_KERNEL
        help
          The compiler and Kconfig options support building with Clang's
          LTO.

choice
        prompt "Link Time Optimization (LTO)"
        default LTO_NONE
        help
          This option enables Link Time Optimization (LTO), which allows the
          compiler to optimize binaries globally.

          If unsure, select LTO_NONE. Note that LTO is very resource-intensive
          so it's disabled by default.

config LTO_NONE
        bool "None"
        help
          Build the kernel normally, without Link Time Optimization (LTO).

config LTO_CLANG_FULL
        bool "Clang Full LTO (EXPERIMENTAL)"
        depends on HAS_LTO_CLANG
        depends on !COMPILE_TEST
        select LTO_CLANG
        help
          This option enables Clang's full Link Time Optimization (LTO), which
          allows the compiler to optimize the kernel globally. If you enable
          this option, the compiler generates LLVM bitcode instead of ELF
          object files, and the actual compilation from bitcode happens at
          the LTO link step, which may take several minutes depending on the
          kernel configuration. More information can be found from LLVM's
          documentation:

          https://llvm.org/docs/LinkTimeOptimization.html

          During link time, this option can use a large amount of RAM, and
          may take much longer than the ThinLTO option.

config LTO_CLANG_THIN
        bool "Clang ThinLTO (EXPERIMENTAL)"
        depends on HAS_LTO_CLANG && ARCH_SUPPORTS_LTO_CLANG_THIN
        select LTO_CLANG
        help
          This option enables Clang's ThinLTO, which allows for parallel
          optimization and faster incremental compiles compared to the
          CONFIG_LTO_CLANG_FULL option. More information can be found
          from Clang's documentation:

          https://clang.llvm.org/docs/ThinLTO.html

          If unsure, say Y.

endchoice

add support for Clang CFI
This change adds support for Clang’s forward-edge Control Flow
Integrity (CFI) checking. With CONFIG_CFI_CLANG, the compiler
injects a runtime check before each indirect function call to ensure
the target is a valid function with the correct static type. This
restricts possible call targets and makes it more difficult for
an attacker to exploit bugs that allow the modification of stored
function pointers. For more details, see:
https://clang.llvm.org/docs/ControlFlowIntegrity.html
Clang requires CONFIG_LTO_CLANG to be enabled with CFI to gain
visibility to possible call targets. Kernel modules are supported
with Clang’s cross-DSO CFI mode, which allows checking between
independently compiled components.
With CFI enabled, the compiler injects a __cfi_check() function into
the kernel and each module for validating local call targets. For
cross-module calls that cannot be validated locally, the compiler
calls the global __cfi_slowpath_diag() function, which determines
the target module and calls the correct __cfi_check() function. This
patch includes a slowpath implementation that uses __module_address()
to resolve call targets, and with CONFIG_CFI_CLANG_SHADOW enabled, a
shadow map that speeds up module look-ups by ~3x.
Clang implements indirect call checking using jump tables and
offers two methods of generating them. With canonical jump tables,
the compiler renames each address-taken function to <function>.cfi
and points the original symbol to a jump table entry, which passes
__cfi_check() validation. This isn’t compatible with stand-alone
assembly code, which the compiler doesn’t instrument, and would
result in indirect calls to assembly code to fail. Therefore, we
default to using non-canonical jump tables instead, where the compiler
generates a local jump table entry <function>.cfi_jt for each
address-taken function, and replaces all references to the function
with the address of the jump table entry.
Note that because non-canonical jump table addresses are local
to each component, they break cross-module function address
equality. Specifically, the address of a global function will be
different in each module, as it's replaced with the address of a local
jump table entry. If this address is passed to a different module,
it won’t match the address of the same function taken there. This
may break code that relies on comparing addresses passed from other
components.
CFI checking can be disabled in a function with the __nocfi attribute.
Additionally, CFI can be disabled for an entire compilation unit by
filtering out CC_FLAGS_CFI.
By default, CFI failures result in a kernel panic to stop a potential
exploit. CONFIG_CFI_PERMISSIVE enables a permissive mode, where the
kernel prints out a rate-limited warning instead, and allows execution
to continue. This option is helpful for locating type mismatches, but
should only be enabled during development.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-2-samitolvanen@google.com

config ARCH_SUPPORTS_CFI_CLANG
        bool
        help
          An architecture should select this option if it can support Clang's
          Control-Flow Integrity (CFI) checking.

config CFI_CLANG
        bool "Use Clang's Control Flow Integrity (CFI)"
        depends on LTO_CLANG && ARCH_SUPPORTS_CFI_CLANG
        depends on CLANG_VERSION >= 140000
        select KALLSYMS
        help
          This option enables Clang’s forward-edge Control Flow Integrity
          (CFI) checking, where the compiler injects a runtime check to each
          indirect function call to ensure the target is a valid function with
          the correct static type. This restricts possible call targets and
          makes it more difficult for an attacker to exploit bugs that allow
          the modification of stored function pointers. More information can be
          found from Clang's documentation:

          https://clang.llvm.org/docs/ControlFlowIntegrity.html
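
As a sketch of what the check catches (hypothetical code, not from the patch
above): an indirect call through a pointer whose static type does not match
the callee fails the CFI check even if the call would otherwise appear to work:

typedef int (*two_arg_fn)(int a, int b);

static int add(int a, int b)
{
        return a + b;
}

/* Deliberately different prototype: one argument instead of two. */
static int negate(int a)
{
        return -a;
}

int indirect_calls(int x)
{
        two_arg_fn fn;

        fn = add;                       /* matching static type: passes the CFI check */
        x = fn(x, 1);

        fn = (two_arg_fn)negate;        /* cast hides the mismatch: CFI traps this call */
        return fn(x, 1);
}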

config CFI_CLANG_SHADOW
        bool "Use CFI shadow to speed up cross-module checks"
        default y
        depends on CFI_CLANG && MODULES
        help
          If you select this option, the kernel builds a fast look-up table of
          CFI check functions in loaded modules to reduce performance overhead.

          If unsure, say Y.

config CFI_PERMISSIVE
        bool "Use CFI in permissive mode"
        depends on CFI_CLANG
        help
          When selected, Control Flow Integrity (CFI) violations result in a
          warning instead of a kernel panic. This option should only be used
          for finding indirect call type mismatches during development.

          If unsure, say N.

config HAVE_ARCH_WITHIN_STACK_FRAMES
        bool
        help
          An architecture should select this if it can walk the kernel stack
          frames to determine if an object is part of either the arguments
          or local variables (i.e. that it excludes saved return addresses,
          and similar) by implementing an inline arch_within_stack_frames(),
          which is used by CONFIG_HARDENED_USERCOPY.
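
A loose sketch of the kind of walk this asks for, assuming a downward-growing
stack and a two-word frame record (saved frame pointer followed by the return
address); real implementations are architecture-specific and this only shows
the shape of the check:

#include <stdbool.h>

struct frame_record {
        struct frame_record *next;      /* saved frame pointer of the caller */
        void *return_address;           /* saved return address */
};

/* Return true only if [obj, obj + len) sits entirely inside the data
 * area of one stack frame, i.e. between two successive frame records,
 * so it can never overlap a saved return address. */
static bool object_within_stack_frames(const void *stack_low,
                                       const void *stack_high,
                                       const void *obj, unsigned long len)
{
        const struct frame_record *inner = __builtin_frame_address(0);
        const char *o = obj;

        while ((const void *)inner >= stack_low &&
               (const void *)inner < stack_high && inner->next) {
                const char *lo = (const char *)(inner + 1);
                const char *hi = (const char *)inner->next;

                if (o >= lo && o + len <= hi)
                        return true;
                inner = inner->next;
        }
        return false;
}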

config HAVE_CONTEXT_TRACKING
        bool
        help
          Provide kernel/user boundary probes necessary for subsystems
          that need it, such as userspace RCU extended quiescent state.
          Syscalls need to be wrapped inside user_exit()-user_enter(), either
          optimized behind a static key or through the slow path using the
          TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
          already protected inside rcu_irq_enter/rcu_irq_exit() but preemption
          or signal handling on irq exit still needs to be protected.
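
A rough sketch of where those wrappers sit in an entry path (hypothetical,
simplified entry code; user_exit() and user_enter() are the real hooks, the
rest is a placeholder):

void user_exit(void);                   /* leave the userspace extended quiescent state */
void user_enter(void);                  /* re-enter it just before returning to user space */
long arch_dispatch_syscall(long nr);    /* placeholder for the arch's dispatcher */

long syscall_entry(long nr)
{
        long ret;

        user_exit();            /* context tracking: we are now in the kernel */
        ret = arch_dispatch_syscall(nr);
        user_enter();           /* context tracking: heading back to user mode */
        return ret;
}
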
context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
Historically, context tracking had to deal with a fragile entry code path,
ie: before user_exit() is called and after user_enter() is called, in
case some of those spots would call schedule() or use RCU. In such
cases, the site had to be protected between exception_enter() and
exception_exit(), which save the context tracking state in the task stack.
Such sleepable fragile code paths had many different origins: tracing,
exceptions, early or late calls to context tracking on syscalls...
Aside from that not being pretty, saving the context tracking state on
the task stack forces us to run context tracking on all CPUs, including
housekeepers, and prevents us from completely shutting down nohz_full at
runtime on a CPU in the future as context tracking and its overhead
would still need to run system wide.
Now thanks to the extensive efforts to sanitize x86 entry code, those
conditions have been removed and we can now get rid of these workarounds
in this architecture.
Create a Kconfig feature to express this achievement.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201117151637.259084-2-frederic@kernel.org

config HAVE_CONTEXT_TRACKING_OFFSTACK
        bool
        help
          Architecture neither relies on exception_enter()/exception_exit()
          nor on schedule_user(). Also preempt_schedule_notrace() and
          preempt_schedule_irq() can't be called in a preemptible section
          while context tracking is CONTEXT_USER. This feature reflects a sane
          entry implementation where the following requirements are met on
          critical entry code, ie: before user_exit() or after user_enter():

          - Critical entry code isn't preemptible (or better yet:
            not interruptible).
          - No use of RCU read side critical sections, unless rcu_nmi_enter()
            got called.
          - No use of instrumentation, unless instrumentation_begin() got
            called.
2020-01-27 23:41:52 +08:00
|
|
|
|
config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and the syscall slow path to implement
	  context tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

#
#  Archs that select this would be capable of PMD-sized vmaps (i.e.,
#  arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
#  must be used to enable allocations to use hugepages.
#
config HAVE_ARCH_HUGE_VMALLOC
	depends on HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

mm: soft-dirty bits for user memory changes tracking
The soft-dirty is a bit on a PTE which helps to track which pages a task
writes to. In order to do this tracking one should
1. Clear soft-dirty bits from PTEs ("echo 4 > /proc/PID/clear_refs")
2. Wait some time.
3. Read soft-dirty bits (the 55th bit in /proc/PID/pagemap2 entries)
To do this tracking, the writable bit is cleared from PTEs when the
soft-dirty bit is. Thus, after this, when the task tries to modify a
page at some virtual address the #PF occurs and the kernel sets the
soft-dirty bit on the respective PTE.
Note, that although all the task's address space is marked as r/o after
the soft-dirty bits clear, the #PF-s that occur after that are processed
fast. This is so, since the pages are still mapped to physical memory,
and thus all the kernel does is find this fact out and put the
writable, dirty and soft-dirty bits back on the PTE.
Another thing to note, is that when mremap moves PTEs they are marked
with soft-dirty as well, since from the user perspective mremap modifies
the virtual memory at mremap's new address.
Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Glauber Costa <glommer@parallels.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
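The cycle above can be exercised from user space. The following is a minimal
sketch, assuming the interface as it ended up in mainline (writing "4" to
/proc/PID/clear_refs and reading bit 55 of the 64-bit /proc/PID/pagemap
entries, rather than the separate pagemap2 file mentioned above); error
handling is abbreviated, and reading pagemap may require privileges on some
kernels.

/* Minimal user-space sketch of the soft-dirty cycle described above.
 * Assumes the mainline interface: "echo 4 > /proc/self/clear_refs" and
 * bit 55 of each 64-bit /proc/self/pagemap entry. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int page_soft_dirty(void *addr)
{
	long psize = sysconf(_SC_PAGESIZE);
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	/* one 64-bit entry per page, indexed by virtual page number */
	pread(fd, &entry, sizeof(entry),
	      (off_t)((uintptr_t)addr / psize) * sizeof(entry));
	close(fd);
	return (int)((entry >> 55) & 1);	/* bit 55 = soft-dirty */
}

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	char *page = aligned_alloc(psize, psize);
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	page[0] = 1;			/* fault the page in */
	write(fd, "4", 1);		/* step 1: clear soft-dirty bits */
	close(fd);
	printf("after clear: soft-dirty=%d\n", page_soft_dirty(page));
	page[0] = 2;			/* write triggers #PF, kernel re-sets the bit */
	printf("after write: soft-dirty=%d\n", page_soft_dirty(page));
	free(page);
	return 0;
}
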
config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
	bool
	help
	  For architectures like powerpc/32 that have constraints on module
	  allocation and need to allocate module data outside of the module
	  area.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  Architecture doesn't only execute the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config HAVE_SOFTIRQ_ON_OWN_STACK
	bool
	help
	  Architecture provides a function to run __do_softirq() on a
	  separate stack.

uaccess: generalize access_ok()
There are many different ways that access_ok() is defined across
architectures, but in the end, they all just compare against the
user_addr_max() value or they accept anything.
Provide one definition that works for most architectures, checking
against TASK_SIZE_MAX for user processes or skipping the check inside
of uaccess_kernel() sections.
For architectures without CONFIG_SET_FS(), this should be the fastest
check, as it comes down to a single comparison of a pointer against a
compile-time constant, while the architecture specific versions tend to
do something more complex for historic reasons or get something wrong.
Type checking for __user annotations is handled inconsistently across
architectures, but this is easily simplified as well by using an inline
function that takes a 'const void __user *' argument. A handful of
callers need an extra __user annotation for this.
Some architectures had a trick of using 33-bit or 65-bit arithmetic on the
addresses to calculate the overflow, however this simpler version uses
fewer registers, which means it can produce better object code in the
end despite needing a second (statically predicted) branch.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64, asm-generic]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
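A user-space rendering of the single-comparison range check described here,
with a made-up constant standing in for TASK_SIZE_MAX (illustrative only,
not the kernel's definition):

/* Sketch of the generalized check: accept [ptr, ptr + size) only if it
 * stays below a fixed limit.  FAKE_LIMIT is a stand-in for TASK_SIZE_MAX;
 * it is roughly 2^47 on 64-bit builds and merely illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_LIMIT (~0UL >> 17)

static bool example_access_ok(const void *ptr, unsigned long size)
{
	unsigned long limit = FAKE_LIMIT;
	unsigned long addr = (unsigned long)(uintptr_t)ptr;

	/* overflow-safe: size must fit under the limit, and ptr must
	 * leave enough room below the limit for size bytes */
	return size <= limit && addr <= limit - size;
}

int main(void)
{
	printf("%d\n", example_access_ok((void *)0x1000, 64));	/* 1: fits */
	printf("%d\n", example_access_ok((void *)-1, 16));	/* 0: past the limit */
	return 0;
}
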
config ALTERNATE_USER_ADDRESS_SPACE
	bool
	help
	  Architectures set this when the CPU uses separate address
	  spaces for kernel and user space pointers. In this case, the
	  access_ok() check on a __user pointer is skipped.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

mm: mmap: add new /proc tunable for mmap_base ASLR
Address Space Layout Randomization (ASLR) provides a barrier to
exploitation of user-space processes in the presence of security
vulnerabilities by making it more difficult to find desired code/data
which could help an attack. This is done by adding a random offset to
the location of regions in the process address space, with a greater
range of potential offset values corresponding to better protection/a
larger search-space for brute force, but also to greater potential for
fragmentation.
The offset added to the mmap_base address, which provides the basis for
the majority of the mappings for a process, is set once on process exec
in arch_pick_mmap_layout() and is done via hard-coded per-arch values,
which reflect, hopefully, the best compromise for all systems. The
trade-off between increased entropy in the offset value generation and
the corresponding increased variability in address space fragmentation
is not absolute, however, and some platforms may tolerate higher amounts
of entropy. This patch introduces both new Kconfig values and a sysctl
interface which may be used to change the amount of entropy used for
offset generation on a system.
The direct motivation for this change was in response to the
libstagefright vulnerabilities that affected Android, specifically to
information provided by Google's project zero at:
http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html
The attack presented therein, by Google's project zero, specifically
targeted the limited randomness used to generate the offset added to the
mmap_base address in order to craft a brute-force-based attack.
Concretely, the attack was against the mediaserver process, which was
limited to respawning every 5 seconds, on an arm device. The hard-coded
8 bits used resulted in an average expected success rate of defeating
the mmap ASLR after just over 10 minutes (128 tries at 5 seconds a
piece). With this patch, and an accompanying increase in the entropy
value to 16 bits, the same attack would take an average expected time of
over 45 hours (32768 tries), which makes it both less feasible and more
likely to be noticed.
The introduced Kconfig and sysctl options are limited by per-arch
minimum and maximum values, the minimum of which was chosen to match the
current hard-coded value and the maximum of which was chosen so as to
give the greatest flexibility without generating an invalid mmap_base
address, generally 3-4 bits less than the number of bits in the
user-space accessible virtual address space.
When deciding whether or not to change the default value, a system
developer should consider that mmap_base address could be placed
anywhere up to 2^(value) bits away from the non-randomized location,
which would introduce variable-sized areas above and below the mmap_base
address such that the maximum vm_area_struct size may be reduced,
preventing very large allocations.
This patch (of 4):
ASLR only uses as few as 8 bits to generate the random offset for the
mmap base address on 32 bit architectures. This value was chosen to
prevent a poorly chosen value from dividing the address space in such a
way as to prevent large allocations. This may not be an issue on all
platforms. Allow the specification of a minimum number of bits so that
platforms desiring greater ASLR protection may determine where to place
the trade-off.
Signed-off-by: Daniel Cashman <dcashman@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Jeff Vander Stoep <jeffv@google.com>
Cc: Nick Kralevich <nnk@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Borislav Petkov <bp@suse.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable

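For reference, the tunable mentioned in the help text can be inspected from
user space. A small sketch follows; the sysctl file is only present on
kernels/architectures that select HAVE_ARCH_MMAP_RND_BITS, and changing it
requires root.

/* Reads /proc/sys/vm/mmap_rnd_bits and prints the address of a fresh
 * anonymous mapping; run it several times to watch the randomized
 * mmap base move. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	int bits = -1;
	FILE *f = fopen("/proc/sys/vm/mmap_rnd_bits", "r");

	if (f) {
		fscanf(f, "%d", &bits);
		fclose(f);
	}
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	printf("mmap_rnd_bits=%d, sample anonymous mapping at %p\n", bits, p);
	return 0;
}
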
config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable

config HAVE_ARCH_COMPAT_MMAP_BASES
	bool
	help
	  This allows 64-bit applications to invoke the 32-bit mmap() syscall
	  and, vice versa, 32-bit applications to call the 64-bit mmap().
	  Required for applications doing different bitness syscalls.

config PAGE_SIZE_LESS_THAN_64KB
	def_bool y
	depends on !ARM64_64K_PAGES
	depends on !IA64_PAGE_SIZE_64KB
	depends on !PAGE_SIZE_64KB
	depends on !PARISC_PAGE_SIZE_64KB
	depends on !PPC_64K_PAGES
	depends on PAGE_SIZE_LESS_THAN_256KB

config PAGE_SIZE_LESS_THAN_256KB
	def_bool y
	depends on !PPC_256K_PAGES
	depends on !PAGE_SIZE_256KB

# This allows using a set of generic functions to determine mmap base
# address by giving priority to top-down scheme only if the process
# is not in legacy mode (compat task, unlimited stack size or
# sysctl_legacy_va_layout).
# An architecture that selects this option can provide its own version of:
# - STACK_RND_MASK
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	bool
	depends on MMU
	select ARCH_HAS_ELF_RANDOMIZE

config HAVE_OBJTOOL
	bool

config HAVE_JUMP_LABEL_HACK
	bool

config HAVE_NOINSTR_HACK
	bool

config HAVE_NOINSTR_VALIDATION
	bool

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports objtool compile-time frame pointer rule
	  validation.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has either save_stack_trace_tsk_reliable() or
	  arch_stack_walk_reliable() function which only returns a stack trace
	  if it can guarantee the trace is reliable.

config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config HAVE_ARCH_NVRAM_OPS
	bool

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2)

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config COMPAT_32BIT_TIME
	bool "Provide system calls for 32-bit time_t"
	default !64BIT || COMPAT
	help
	  This enables 32-bit time_t support in addition to 64-bit time_t support.
	  This is relevant on all 32-bit architectures, and 64-bit architectures
	  as part of compat syscall handling.

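As a quick aside on why 32-bit time_t needs its own code path, the signed
32-bit seconds counter runs out in January 2038; the small, hedged example
below just prints that limit and is not part of the kernel sources.

/* Prints the last second representable in a signed 32-bit time_t. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
	time_t limit = (time_t)INT32_MAX;
	char buf[64];

	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", gmtime(&limit));
	printf("32-bit time_t overflows after: %s\n", buf);
	return 0;
}
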
config ARCH_NO_PREEMPT
	bool

config ARCH_EPHEMERAL_INODES
	def_bool n
	help
	  An arch should select this symbol if it doesn't keep track of inode
	  instances on its own, but instead relies on something else (e.g. the
	  host kernel for a UML kernel).

config ARCH_SUPPORTS_RT
	bool

lib/GCD.c: use binary GCD algorithm instead of Euclidean
The binary GCD algorithm is based on the following facts:
1. If a and b are all evens, then gcd(a,b) = 2 * gcd(a/2, b/2)
2. If a is even and b is odd, then gcd(a,b) = gcd(a/2, b)
3. If a and b are all odds, then gcd(a,b) = gcd((a-b)/2, b) = gcd((a+b)/2, b)
Even on x86 machines with reasonable division hardware, the binary
algorithm runs about 25% faster (80% the execution time) than the
division-based Euclidean algorithm.
On platforms like Alpha and ARMv6 where division is a function call to
emulation code, it's even more significant.
There are two variants of the code here, depending on whether a fast
__ffs (find least significant set bit) instruction is available. This
allows the unpredictable branches in the bit-at-a-time shifting loop to
be eliminated.
If fast __ffs is not available, the "even/odd" GCD variant is used.
I use the following code to benchmark:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#define swap(a, b) \
do { \
a ^= b; \
b ^= a; \
a ^= b; \
} while (0)
unsigned long gcd0(unsigned long a, unsigned long b)
{
unsigned long r;
if (a < b) {
swap(a, b);
}
if (b == 0)
return a;
while ((r = a % b) != 0) {
a = b;
b = r;
}
return b;
}
unsigned long gcd1(unsigned long a, unsigned long b)
{
unsigned long r = a | b;
if (!a || !b)
return r;
b >>= __builtin_ctzl(b);
for (;;) {
a >>= __builtin_ctzl(a);
if (a == b)
return a << __builtin_ctzl(r);
if (a < b)
swap(a, b);
a -= b;
}
}
unsigned long gcd2(unsigned long a, unsigned long b)
{
unsigned long r = a | b;
if (!a || !b)
return r;
r &= -r;
while (!(b & r))
b >>= 1;
for (;;) {
while (!(a & r))
a >>= 1;
if (a == b)
return a;
if (a < b)
swap(a, b);
a -= b;
a >>= 1;
if (a & r)
a += b;
a >>= 1;
}
}
unsigned long gcd3(unsigned long a, unsigned long b)
{
unsigned long r = a | b;
if (!a || !b)
return r;
b >>= __builtin_ctzl(b);
if (b == 1)
return r & -r;
for (;;) {
a >>= __builtin_ctzl(a);
if (a == 1)
return r & -r;
if (a == b)
return a << __builtin_ctzl(r);
if (a < b)
swap(a, b);
a -= b;
}
}
unsigned long gcd4(unsigned long a, unsigned long b)
{
unsigned long r = a | b;
if (!a || !b)
return r;
r &= -r;
while (!(b & r))
b >>= 1;
if (b == r)
return r;
for (;;) {
while (!(a & r))
a >>= 1;
if (a == r)
return r;
if (a == b)
return a;
if (a < b)
swap(a, b);
a -= b;
a >>= 1;
if (a & r)
a += b;
a >>= 1;
}
}
static unsigned long (*gcd_func[])(unsigned long a, unsigned long b) = {
gcd0, gcd1, gcd2, gcd3, gcd4,
};
#define TEST_ENTRIES (sizeof(gcd_func) / sizeof(gcd_func[0]))
#if defined(__x86_64__)
#define rdtscll(val) do { \
unsigned long __a,__d; \
__asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \
(val) = ((unsigned long long)__a) | (((unsigned long long)__d)<<32); \
} while(0)
static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
unsigned long a, unsigned long b, unsigned long *res)
{
unsigned long long start, end;
unsigned long long ret;
unsigned long gcd_res;
rdtscll(start);
gcd_res = gcd(a, b);
rdtscll(end);
if (end >= start)
ret = end - start;
else
ret = ~0ULL - start + 1 + end;
*res = gcd_res;
return ret;
}
#else
static inline struct timespec read_time(void)
{
struct timespec time;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time);
return time;
}
static inline unsigned long long diff_time(struct timespec start, struct timespec end)
{
struct timespec temp;
if ((end.tv_nsec - start.tv_nsec) < 0) {
temp.tv_sec = end.tv_sec - start.tv_sec - 1;
temp.tv_nsec = 1000000000ULL + end.tv_nsec - start.tv_nsec;
} else {
temp.tv_sec = end.tv_sec - start.tv_sec;
temp.tv_nsec = end.tv_nsec - start.tv_nsec;
}
return temp.tv_sec * 1000000000ULL + temp.tv_nsec;
}
static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
unsigned long a, unsigned long b, unsigned long *res)
{
struct timespec start, end;
unsigned long gcd_res;
start = read_time();
gcd_res = gcd(a, b);
end = read_time();
*res = gcd_res;
return diff_time(start, end);
}
#endif
static inline unsigned long get_rand()
{
if (sizeof(long) == 8)
return (unsigned long)rand() << 32 | rand();
else
return rand();
}
int main(int argc, char **argv)
{
unsigned int seed = time(0);
int loops = 100;
int repeats = 1000;
unsigned long (*res)[TEST_ENTRIES];
unsigned long long elapsed[TEST_ENTRIES];
int i, j, k;
for (;;) {
int opt = getopt(argc, argv, "n:r:s:");
/* End condition always first */
if (opt == -1)
break;
switch (opt) {
case 'n':
loops = atoi(optarg);
break;
case 'r':
repeats = atoi(optarg);
break;
case 's':
seed = strtoul(optarg, NULL, 10);
break;
default:
/* You won't actually get here. */
break;
}
}
res = malloc(sizeof(unsigned long) * TEST_ENTRIES * loops);
memset(elapsed, 0, sizeof(elapsed));
srand(seed);
for (j = 0; j < loops; j++) {
unsigned long a = get_rand();
/* Do we have args? */
unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();
unsigned long long min_elapsed[TEST_ENTRIES];
for (k = 0; k < repeats; k++) {
for (i = 0; i < TEST_ENTRIES; i++) {
unsigned long long tmp = benchmark_gcd_func(gcd_func[i], a, b, &res[j][i]);
if (k == 0 || min_elapsed[i] > tmp)
min_elapsed[i] = tmp;
}
}
for (i = 0; i < TEST_ENTRIES; i++)
elapsed[i] += min_elapsed[i];
}
for (i = 0; i < TEST_ENTRIES; i++)
printf("gcd%d: elapsed %llu\n", i, elapsed[i]);
k = 0;
srand(seed);
for (j = 0; j < loops; j++) {
unsigned long a = get_rand();
unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();
for (i = 1; i < TEST_ENTRIES; i++) {
if (res[j][i] != res[j][0])
break;
}
if (i < TEST_ENTRIES) {
if (k == 0) {
k = 1;
fprintf(stderr, "Error:\n");
}
fprintf(stderr, "gcd(%lu, %lu): ", a, b);
for (i = 0; i < TEST_ENTRIES; i++)
fprintf(stderr, "%ld%s", res[j][i], i < TEST_ENTRIES - 1 ? ", " : "\n");
}
}
if (k == 0)
fprintf(stderr, "PASS\n");
free(res);
return 0;
}
Compiled with "-O2", on "VirtualBox 4.4.0-22-generic #38-Ubuntu x86_64" got:
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 10174
gcd1: elapsed 2120
gcd2: elapsed 2902
gcd3: elapsed 2039
gcd4: elapsed 2812
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9309
gcd1: elapsed 2280
gcd2: elapsed 2822
gcd3: elapsed 2217
gcd4: elapsed 2710
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9589
gcd1: elapsed 2098
gcd2: elapsed 2815
gcd3: elapsed 2030
gcd4: elapsed 2718
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9914
gcd1: elapsed 2309
gcd2: elapsed 2779
gcd3: elapsed 2228
gcd4: elapsed 2709
PASS
[akpm@linux-foundation.org: avoid #defining a CONFIG_ variable]
Signed-off-by: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
Signed-off-by: George Spelvin <linux@horizon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
	help
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  To use this with software KASAN modes, the architecture must support
	  backing virtual mappings with real shadow memory, and KASAN_VMALLOC
	  must be enabled.

stack: Optionally randomize kernel stack offset each syscall
This provides the ability for architectures to enable kernel stack base
address offset randomization. This feature is controlled by the boot
param "randomize_kstack_offset=on/off", with its default value set by
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT.
This feature is based on the original idea from the last public release
of PaX's RANDKSTACK feature: https://pax.grsecurity.net/docs/randkstack.txt
All the credit for the original idea goes to the PaX team. Note that
the design and implementation of this upstream randomize_kstack_offset
feature differs greatly from the RANDKSTACK feature (see below).
Reasoning for the feature:
This feature aims to make harder the various stack-based attacks that
rely on deterministic stack structure. We have had many such attacks in
past (just to name few):
https://jon.oberheide.org/files/infiltrate12-thestackisback.pdf
https://jon.oberheide.org/files/stackjacking-infiltrate11.pdf
https://googleprojectzero.blogspot.com/2016/06/exploiting-recursion-in-linux-kernel_20.html
As Linux kernel stack protections have been constantly improving
(vmap-based stack allocation with guard pages, removal of thread_info,
STACKLEAK), attackers have had to find new ways for their exploits
to work. They have done so, continuing to rely on the kernel's stack
determinism, in situations where VMAP_STACK and THREAD_INFO_IN_TASK_STRUCT
were not relevant. For example, the following recent attacks would have
been hampered if the stack offset was non-deterministic between syscalls:
https://repositorio-aberto.up.pt/bitstream/10216/125357/2/374717.pdf
(page 70: targeting the pt_regs copy with linear stack overflow)
https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html
(leaked stack address from one syscall as a target during next syscall)
The main idea is that since the stack offset is randomized on each system
call, it is harder for an attack to reliably land in any particular place
on the thread stack, even with address exposures, as the stack base will
change on the next syscall. Also, since randomization is performed after
placing pt_regs, the ptrace-based approach[1] to discover the randomized
offset during a long-running syscall should not be possible.
Design description:
During most of the kernel's execution, it runs on the "thread stack",
which is pretty deterministic in its structure: it is fixed in size,
and on every entry from userspace to kernel on a syscall the thread
stack starts construction from an address fetched from the per-cpu
cpu_current_top_of_stack variable. The first element to be pushed to the
thread stack is the pt_regs struct that stores all required CPU registers
and syscall parameters. Finally the specific syscall function is called,
with the stack being used as the kernel executes the resulting request.
The goal of randomize_kstack_offset feature is to add a random offset
after the pt_regs has been pushed to the stack and before the rest of the
thread stack is used during the syscall processing, and to change it every
time a process issues a syscall. The source of randomness is currently
architecture-defined (but x86 is using the low byte of rdtsc()). Future
improvements for different entropy sources are possible, but out of scope
for this patch. Furthermore, to add more unpredictability, new offsets
are chosen at the end of syscalls (the timing of which should be less
easy to measure from userspace than at syscall entry time), and stored
in a per-CPU variable, so that the life of the value does not stay
explicitly tied to a single task.
As suggested by Andy Lutomirski, the offset is added using alloca()
and an empty asm() statement with an output constraint, since it avoids
changes to assembly syscall entry code, to the unwinder, and provides
correct stack alignment as defined by the compiler.
In order to make this available by default with zero performance impact
for those that don't want it, it is boot-time selectable with static
branches. This way, if the overhead is not wanted, it can just be
left turned off with no performance impact.
The generated assembly for x86_64 with GCC looks like this:
...
ffffffff81003977: 65 8b 05 02 ea 00 7f mov %gs:0x7f00ea02(%rip),%eax
# 12380 <kstack_offset>
ffffffff8100397e: 25 ff 03 00 00 and $0x3ff,%eax
ffffffff81003983: 48 83 c0 0f add $0xf,%rax
ffffffff81003987: 25 f8 07 00 00 and $0x7f8,%eax
ffffffff8100398c: 48 29 c4 sub %rax,%rsp
ffffffff8100398f: 48 8d 44 24 0f lea 0xf(%rsp),%rax
ffffffff81003994: 48 83 e0 f0 and $0xfffffffffffffff0,%rax
...
As a result of the above stack alignment, this patch introduces about
5 bits of randomness after pt_regs is spilled to the thread stack on
x86_64, and 6 bits on x86_32 (since it has 1 fewer bit required for
stack alignment). The amount of entropy could be adjusted based on how
much of the stack space we wish to trade for security.
My measure of syscall performance overhead (on x86_64):
lmbench: /usr/lib/lmbench/bin/x86_64-linux-gnu/lat_syscall -N 10000 null
randomize_kstack_offset=y Simple syscall: 0.7082 microseconds
randomize_kstack_offset=n Simple syscall: 0.7016 microseconds
So, roughly 0.9% overhead growth for a no-op syscall, which is very
manageable. And for people that don't want this, it's off by default.
There are two gotchas with using the alloca() trick. First,
compilers that have Stack Clash protection (-fstack-clash-protection)
enabled by default (e.g. Ubuntu[3]) add pagesize stack probes to
any dynamic stack allocations. While the randomization offset is
always less than a page, the resulting assembly would still contain
(unreachable!) probing routines, bloating the resulting assembly. To
avoid this, -fno-stack-clash-protection is unconditionally added to
the kernel Makefile since this is the only dynamic stack allocation in
the kernel (now that VLAs have been removed) and it is provably safe
from Stack Clash style attacks.
The second gotcha with alloca() is a negative interaction with
-fstack-protector*, in that it sees the alloca() as an array allocation,
which triggers the unconditional addition of the stack canary function
pre/post-amble which slows down syscalls regardless of the static
branch. In order to avoid adding this unneeded check and its associated
performance impact, architectures need to carefully remove uses of
-fstack-protector-strong (or -fstack-protector) in the compilation units
that use the add_random_kstack() macro and to audit the resulting stack
mitigation coverage (to make sure no desired coverage disappears). No
change is visible for this on x86 because the stack protector is already
unconditionally disabled for the compilation unit, but the change is
required on arm64. There is, unfortunately, no attribute that can be
used to disable stack protector for specific functions.
Comparison to PaX RANDKSTACK feature:
The RANDKSTACK feature randomizes the location of the stack start
(cpu_current_top_of_stack), i.e. including the location of pt_regs
structure itself on the stack. Initially this patch followed the same
approach, but during the recent discussions[2], it has been determined
to be of little value since, if ptrace functionality is available for
an attacker, they can use PTRACE_PEEKUSR/PTRACE_POKEUSR to read/write
different offsets in the pt_regs struct, observe the cache behavior of
the pt_regs accesses, and figure out the random stack offset. Another
difference is that the random offset is stored in a per-cpu variable,
rather than having it be per-thread. As a result, these implementations
differ a fair bit in their implementation details and results, though
obviously the intent is similar.
[1] https://lore.kernel.org/kernel-hardening/2236FBA76BA1254E88B949DDB74E612BA4BC57C1@IRSMSX102.ger.corp.intel.com/
[2] https://lore.kernel.org/kernel-hardening/20190329081358.30497-1-elena.reshetova@intel.com/
[3] https://lists.ubuntu.com/archives/ubuntu-devel/2019-June/040741.html
Co-developed-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210401232347.2791257-4-keescook@chromium.org
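The alloca() + empty asm() trick is easy to demonstrate outside the kernel.
The sketch below is not the kernel's add_random_kstack_offset() macro, just
a user-space illustration of the mechanism: a small, masked random amount of
stack is reserved before doing the real work, so locals land at a different
address on every "entry".

/* User-space illustration of the alloca() + empty asm() mechanism the
 * commit describes (not the kernel implementation). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static void handle_request(void)
{
	int local;

	printf("local variable at %p\n", (void *)&local);
}

static void entry_with_random_offset(uint32_t entropy)
{
	/* keep the offset small, as the kernel does with its mask */
	uint32_t offset = entropy & 0x3ff;
	uint8_t *ptr = __builtin_alloca(offset);

	/* the empty asm with ptr as an input keeps the otherwise unused
	 * allocation from being optimized away */
	__asm__ __volatile__("" : : "r"(ptr) : "memory");
	handle_request();
}

int main(void)
{
	srand((unsigned)time(NULL) ^ (unsigned)getpid());
	for (int i = 0; i < 4; i++)
		entry_with_random_offset((uint32_t)rand());
	return 0;
}
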
config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stack
	  offset randomization with calls to add_random_kstack_offset()
	  during syscall entry and choose_random_kstack_offset() during
	  syscall exit. Careful removal of -fstack-protector-strong and
	  -fstack-protector should also be applied to the entry code and
	  closely examined, as the artificial stack bump looks like an array
	  to the compiler, so it will attempt to add canary checks regardless
	  of the static branch state.

config RANDOMIZE_KSTACK_OFFSET
	bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
	default y
	depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
stack: Optionally randomize kernel stack offset each syscall
This provides the ability for architectures to enable kernel stack base
address offset randomization. This feature is controlled by the boot
param "randomize_kstack_offset=on/off", with its default value set by
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT.
This feature is based on the original idea from the last public release
of PaX's RANDKSTACK feature: https://pax.grsecurity.net/docs/randkstack.txt
All the credit for the original idea goes to the PaX team. Note that
the design and implementation of this upstream randomize_kstack_offset
feature differs greatly from the RANDKSTACK feature (see below).
Reasoning for the feature:
This feature aims to make harder the various stack-based attacks that
rely on deterministic stack structure. We have had many such attacks in
past (just to name few):
https://jon.oberheide.org/files/infiltrate12-thestackisback.pdf
https://jon.oberheide.org/files/stackjacking-infiltrate11.pdf
https://googleprojectzero.blogspot.com/2016/06/exploiting-recursion-in-linux-kernel_20.html
As Linux kernel stack protections have been constantly improving
(vmap-based stack allocation with guard pages, removal of thread_info,
STACKLEAK), attackers have had to find new ways for their exploits
to work. They have done so, continuing to rely on the kernel's stack
determinism, in situations where VMAP_STACK and THREAD_INFO_IN_TASK_STRUCT
were not relevant. For example, the following recent attacks would have
been hampered if the stack offset was non-deterministic between syscalls:
https://repositorio-aberto.up.pt/bitstream/10216/125357/2/374717.pdf
(page 70: targeting the pt_regs copy with linear stack overflow)
https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html
(leaked stack address from one syscall as a target during next syscall)
The main idea is that since the stack offset is randomized on each system
call, it is harder for an attack to reliably land in any particular place
on the thread stack, even with address exposures, as the stack base will
change on the next syscall. Also, since randomization is performed after
placing pt_regs, the ptrace-based approach[1] to discover the randomized
offset during a long-running syscall should not be possible.
Design description:
During most of the kernel's execution, it runs on the "thread stack",
which is pretty deterministic in its structure: it is fixed in size,
and on every entry from userspace to kernel on a syscall the thread
stack starts construction from an address fetched from the per-cpu
cpu_current_top_of_stack variable. The first element to be pushed to the
thread stack is the pt_regs struct that stores all required CPU registers
and syscall parameters. Finally the specific syscall function is called,
with the stack being used as the kernel executes the resulting request.
The goal of randomize_kstack_offset feature is to add a random offset
after the pt_regs has been pushed to the stack and before the rest of the
thread stack is used during the syscall processing, and to change it every
time a process issues a syscall. The source of randomness is currently
architecture-defined (but x86 is using the low byte of rdtsc()). Future
improvements for different entropy sources is possible, but out of scope
for this patch. Further more, to add more unpredictability, new offsets
are chosen at the end of syscalls (the timing of which should be less
easy to measure from userspace than at syscall entry time), and stored
in a per-CPU variable, so that the life of the value does not stay
explicitly tied to a single task.
As suggested by Andy Lutomirski, the offset is added using alloca()
and an empty asm() statement with an output constraint, since it avoids
changes to assembly syscall entry code, to the unwinder, and provides
correct stack alignment as defined by the compiler.
In order to make this available by default with zero performance impact
for those that don't want it, it is boot-time selectable with static
branches. This way, if the overhead is not wanted, it can just be
left turned off with no performance impact.
The generated assembly for x86_64 with GCC looks like this:
...
ffffffff81003977: 65 8b 05 02 ea 00 7f mov %gs:0x7f00ea02(%rip),%eax
# 12380 <kstack_offset>
ffffffff8100397e: 25 ff 03 00 00 and $0x3ff,%eax
ffffffff81003983: 48 83 c0 0f add $0xf,%rax
ffffffff81003987: 25 f8 07 00 00 and $0x7f8,%eax
ffffffff8100398c: 48 29 c4 sub %rax,%rsp
ffffffff8100398f: 48 8d 44 24 0f lea 0xf(%rsp),%rax
ffffffff81003994: 48 83 e0 f0 and $0xfffffffffffffff0,%rax
...
As a result of the above stack alignment, this patch introduces about
5 bits of randomness after pt_regs is spilled to the thread stack on
x86_64, and 6 bits on x86_32 (since it has 1 fewer bit required for
stack alignment). The amount of entropy could be adjusted based on how
much of the stack space we wish to trade for security.
My measure of syscall performance overhead (on x86_64):
lmbench: /usr/lib/lmbench/bin/x86_64-linux-gnu/lat_syscall -N 10000 null
randomize_kstack_offset=y Simple syscall: 0.7082 microseconds
randomize_kstack_offset=n Simple syscall: 0.7016 microseconds
So, roughly 0.9% overhead growth for a no-op syscall, which is very
manageable. And for people that don't want this, it's off by default.
There are two gotchas with using the alloca() trick. First,
compilers that have Stack Clash protection (-fstack-clash-protection)
enabled by default (e.g. Ubuntu[3]) add pagesize stack probes to
any dynamic stack allocations. While the randomization offset is
always less than a page, the resulting assembly would still contain
(unreachable!) probing routines, bloating the generated code. To
avoid this, -fno-stack-clash-protection is unconditionally added to
the kernel Makefile since this is the only dynamic stack allocation in
the kernel (now that VLAs have been removed) and it is provably safe
from Stack Clash style attacks.
The second gotcha with alloca() is a negative interaction with
-fstack-protector*, in that it sees the alloca() as an array allocation,
which triggers the unconditional addition of the stack canary function
pre/post-amble which slows down syscalls regardless of the static
branch. In order to avoid adding this unneeded check and its associated
performance impact, architectures need to carefully remove uses of
-fstack-protector-strong (or -fstack-protector) in the compilation units
that use the add_random_kstack_offset() macro and to audit the resulting stack
mitigation coverage (to make sure no desired coverage disappears). No
change is visible for this on x86 because the stack protector is already
unconditionally disabled for the compilation unit, but the change is
required on arm64. There is, unfortunately, no attribute that can be
used to disable stack protector for specific functions.
Comparison to PaX RANDKSTACK feature:
The RANDKSTACK feature randomizes the location of the stack start
(cpu_current_top_of_stack), i.e. including the location of pt_regs
structure itself on the stack. Initially this patch followed the same
approach, but during the recent discussions[2], it has been determined
to be of little value since, if ptrace functionality is available for
an attacker, they can use PTRACE_PEEKUSR/PTRACE_POKEUSR to read/write
different offsets in the pt_regs struct, observe the cache behavior of
the pt_regs accesses, and figure out the random stack offset. Another
difference is that the random offset is stored in a per-cpu variable,
rather than being per-thread. As a result, the two approaches
differ a fair bit in their implementation details and results, though
obviously the intent is similar.
[1] https://lore.kernel.org/kernel-hardening/2236FBA76BA1254E88B949DDB74E612BA4BC57C1@IRSMSX102.ger.corp.intel.com/
[2] https://lore.kernel.org/kernel-hardening/20190329081358.30497-1-elena.reshetova@intel.com/
[3] https://lists.ubuntu.com/archives/ubuntu-devel/2019-June/040741.html
Co-developed-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210401232347.2791257-4-keescook@chromium.org
2021-04-02 07:23:44 +08:00
	help
	  The kernel stack offset can be randomized (after pt_regs) by
	  roughly 5 bits of entropy, frustrating memory corruption
	  attacks that depend on stack address determinism or
	  cross-syscall address exposures.

	  The feature is controlled via the "randomize_kstack_offset=on/off"
	  kernel boot param, and if turned off has zero overhead due to its use
	  of static branches (see JUMP_LABEL).

	  If unsure, say Y.

config RANDOMIZE_KSTACK_OFFSET_DEFAULT
	bool "Default state of kernel stack offset randomization"
	depends on RANDOMIZE_KSTACK_OFFSET
	help
	  Kernel stack offset randomization is controlled by kernel boot param
	  "randomize_kstack_offset=on/off", and this config chooses the default
	  boot state.
config ARCH_OPTIONAL_KERNEL_RWX
	def_bool n

config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	def_bool n

config ARCH_HAS_STRICT_KERNEL_RWX
	def_bool n

config STRICT_KERNEL_RWX
	bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_KERNEL_RWX
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, kernel text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. executing the heap
	  or modifying text)

	  These features are considered standard security practice these days.
	  You should say Y here in almost all cases.

config ARCH_HAS_STRICT_MODULE_RWX
	def_bool n

config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, module text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. writing to text)

# select if the architecture provides an asm/dma-direct.h header
config ARCH_HAS_PHYS_TO_DMA
	bool

config HAVE_ARCH_COMPILER_H
	bool
	help
	  An architecture can select this if it provides an
	  asm/compiler.h header that should be included after
	  linux/compiler-*.h in order to override macro definitions that those
	  headers generally provide.
arch: enable relative relocations for arm64, power and x86
Patch series "add support for relative references in special sections", v10.
This adds support for emitting special sections such as initcall arrays,
PCI fixups and tracepoints as relative references rather than absolute
references. This reduces the size by 50% on 64-bit architectures, but
more importantly, it removes the need for carrying relocation metadata for
these sections in relocatable kernels (e.g., for KASLR) that needs to be
fixed up at boot time. On arm64, this reduces the vmlinux footprint of
such a reference by 8x (8 byte absolute reference + 24 byte RELA entry vs
4 byte relative reference)
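For illustration, a self-contained sketch of the underlying idea: store a
32-bit "target minus location" offset and convert it back into a pointer.
The names here are hypothetical, and the kernel emits such entries with
asm/linker support rather than computing them at runtime as done below.

  #include <stdint.h>
  #include <stdio.h>

  /* Turn a stored 32-bit place-relative offset back into a pointer. */
  static inline void *prel32_to_ptr(const int32_t *off)
  {
          return (void *)((intptr_t)off + *off);
  }

  static void my_initcall(void)
  {
          puts("initcall ran");
  }

  int main(void)
  {
          /* 4-byte entry holding "target - address of the entry itself". */
          int32_t ref = (int32_t)((intptr_t)&my_initcall - (intptr_t)&ref);
          void (*fn)(void) = (void (*)(void))prel32_to_ptr(&ref);

          fn();
          return 0;
  }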
Patch #3 was sent out before as a single patch. This series supersedes
the previous submission. This version makes relative ksymtab entries
dependent on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS rather
than trying to infer from kbuild test robot replies for which
architectures it should be blacklisted.
Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS,
and sets it for the main architectures that are expected to benefit the
most from this feature, i.e., 64-bit architectures or ones that use
runtime relocations.
Patch #2 adds support for #define'ing __DISABLE_EXPORTS to get rid of
ksymtab/kcrctab sections in decompressor and EFI stub objects when
rebuilding existing C files to run in a different context.
Patches #4 - #6 implement relative references for initcalls, PCI fixups
and tracepoints, respectively, all of which produce sections on the order of
~1000 entries on an arm64 defconfig kernel with tracing enabled. This
means we save about 28 KB of vmlinux space for each of these patches.
[From the v7 series blurb, which included the jump_label patches as well]:
For the arm64 kernel, all patches combined reduce the memory footprint
of vmlinux by about 1.3 MB (using a config copied from Ubuntu that has
KASLR enabled), of which ~1 MB is the size reduction of the RELA section
in .init, and the remaining 300 KB is reduction of .text/.data.
This patch (of 6):
Before updating certain subsystems to use place relative 32-bit
relocations in special sections, to save space and reduce the number of
absolute relocations that need to be processed at runtime by relocatable
kernels, introduce the Kconfig symbol and define it for some architectures
that should be able to support and benefit from it.
Link: http://lkml.kernel.org/r/20180704083651.24360-2-ard.biesheuvel@linaro.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: James Morris <jmorris@namei.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
Cc: James Morris <james.morris@microsoft.com>
Cc: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 12:56:00 +08:00
config HAVE_ARCH_PREL32_RELOCATIONS
	bool
	help
	  May be selected by an architecture if it supports place-relative
	  32-bit relocations, both in the toolchain and in the module loader,
	  in which case relative references can be used in special sections
	  for PCI fixup, initcalls etc which are only half the size on 64 bit
	  architectures, and don't require runtime relocation on relocatable
	  kernels.

config ARCH_USE_MEMREMAP_PROT
	bool

config LOCK_EVENT_COUNTS
	bool "Locking event counts collection"
	depends on DEBUG_FS
	help
	  Enable light-weight counting of various locking related events
	  in the system with minimal performance impact. This reduces
	  the chance of application behavior change because of timing
	  differences. The counts are reported via debugfs.

# Select if the architecture has support for applying RELR relocations.
config ARCH_HAS_RELR
	bool

config RELR
	bool "Use RELR relocation packing"
	depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR
	default y
	help
	  Store the kernel's dynamic relocations in the RELR relocation packing
	  format. Requires a compatible linker (LLD supports this feature), as
	  well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy
	  are compatible).

config ARCH_HAS_MEM_ENCRYPT
	bool

config ARCH_HAS_CC_PLATFORM
	bool

config HAVE_SPARSE_SYSCALL_NR
	bool
	help
	  An architecture should select this if its syscall numbering is sparse
	  to save space. For example, MIPS architecture has a syscall array with
	  entries at 4000, 5000 and 6000 locations. This option turns on syscall
	  related optimizations for a given architecture.

config ARCH_HAS_VDSO_DATA
	bool

config HAVE_STATIC_CALL
	bool

config HAVE_STATIC_CALL_INLINE
	bool
	depends on HAVE_STATIC_CALL
	select OBJTOOL

config HAVE_PREEMPT_DYNAMIC
	bool
sched/preempt: Add PREEMPT_DYNAMIC using static keys
Where an architecture selects HAVE_STATIC_CALL but not
HAVE_STATIC_CALL_INLINE, each static call has an out-of-line trampoline
which will either branch to a callee or return to the caller.
On such architectures, a number of constraints can conspire to make
those trampolines more complicated and potentially less useful than we'd
like. For example:
* Hardware and software control flow integrity schemes can require the
addition of "landing pad" instructions (e.g. `BTI` for arm64), which
will also be present at the "real" callee.
* Limited branch ranges can require that trampolines generate or load an
address into a register and perform an indirect branch (or at least
have a slow path that does so). This loses some of the benefits of
having a direct branch.
* Interaction with SW CFI schemes can be complicated and fragile, e.g.
requiring that we can recognise idiomatic codegen and remove
indirections, at least until clang provides more helpful
mechanisms for dealing with this.
For PREEMPT_DYNAMIC, we don't need the full power of static calls, as we
really only need to enable/disable specific preemption functions. We can
achieve the same effect without a number of the pain points above by
using static keys to fold early returns into the preemption functions
themselves rather than in an out-of-line trampoline, effectively
inlining the trampoline into the start of the function.
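In plain C, the shape of that fold-in looks roughly as below; the kernel uses
a real static key (static_branch_unlikely() / jump labels), for which the
ordinary boolean here is only a stand-in, and the names are illustrative.

  #include <stdbool.h>

  /* Stand-in for the jump label; flipped when the preemption model changes. */
  static bool sk_dynamic_cond_resched = true;

  static int __cond_resched_body(void)
  {
          /* ... the real rescheduling work ... */
          return 1;
  }

  int dynamic_cond_resched(void)
  {
          /*
           * With a real static key this test compiles down to a patched
           * branch/NOP, so the disabled case is an immediate "return 0".
           */
          if (!sk_dynamic_cond_resched)
                  return 0;
          return __cond_resched_body();
  }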
For arm64, this results in good code generation. For example, the
dynamic_cond_resched() wrapper looks as follows when enabled. When
disabled, the first `B` is replaced with a `NOP`, resulting in an early
return.
| <dynamic_cond_resched>:
| bti c
| b <dynamic_cond_resched+0x10> // or `nop`
| mov w0, #0x0
| ret
| mrs x0, sp_el0
| ldr x0, [x0, #8]
| cbnz x0, <dynamic_cond_resched+0x8>
| paciasp
| stp x29, x30, [sp, #-16]!
| mov x29, sp
| bl <preempt_schedule_common>
| mov w0, #0x1
| ldp x29, x30, [sp], #16
| autiasp
| ret
... compared to the regular form of the function:
| <__cond_resched>:
| bti c
| mrs x0, sp_el0
| ldr x1, [x0, #8]
| cbz x1, <__cond_resched+0x18>
| mov w0, #0x0
| ret
| paciasp
| stp x29, x30, [sp, #-16]!
| mov x29, sp
| bl <preempt_schedule_common>
| mov w0, #0x1
| ldp x29, x30, [sp], #16
| autiasp
| ret
Any architecture which implements static keys should be able to use this
to implement PREEMPT_DYNAMIC with similar cost to non-inlined static
calls. Since this is likely to have greater overhead than (inlined)
static calls, PREEMPT_DYNAMIC is only defaulted to enabled when
HAVE_PREEMPT_DYNAMIC_CALL is selected.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20220214165216.2231574-6-mark.rutland@arm.com
2022-02-15 00:52:14 +08:00
config HAVE_PREEMPT_DYNAMIC_CALL
	bool
	depends on HAVE_STATIC_CALL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static calls.

	  Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
	  preemption function will be patched directly.

	  Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
	  call to a preemption function will go through a trampoline, and the
	  trampoline will be patched.

	  It is strongly advised to support inline static call to avoid any
	  overhead.

config HAVE_PREEMPT_DYNAMIC_KEY
	bool
	depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static keys.

	  Each preemption function will be given an early return based on a
	  static key. This should have slightly lower overhead than non-inline
	  static calls, as this effectively inlines each trampoline into the
	  start of its callee. This may avoid redundant work, and may
	  integrate better with CFI schemes.

	  This will have greater overhead than using inline static calls as
	  the call to the preemption function cannot be entirely elided.

config ARCH_WANT_LD_ORPHAN_WARN
	bool
	help
	  An arch should select this symbol once all linker sections are explicitly
	  included, size-asserted, or discarded in the linker scripts. This is
	  important because we never want expected sections to be placed heuristically
	  by the linker, since the locations of such sections can change between linker
	  versions.

config HAVE_ARCH_PFN_VALID
	bool

config ARCH_SUPPORTS_DEBUG_PAGEALLOC
	bool

config ARCH_SUPPORTS_PAGE_TABLE_CHECK
	bool

config ARCH_SPLIT_ARG64
	bool
	help
	  If a 32-bit architecture requires 64-bit arguments to be split into
	  pairs of 32-bit arguments, select this option.

config ARCH_HAS_ELFCORE_COMPAT
	bool

config ARCH_HAS_PARANOID_L1D_FLUSH
	bool

config DYNAMIC_SIGFRAME
	bool
x86/sgx: Add an attribute for the amount of SGX memory in a NUMA node
== Problem ==
The amount of SGX memory on a system is determined by the BIOS and it
varies wildly between systems. It can be as small as dozens of MBs
and as large as many GBs on servers. Just like how applications need
to know how much regular RAM is available, enclave builders need to
know how much SGX memory an enclave can consume.
== Solution ==
Introduce a new sysfs file:
/sys/devices/system/node/nodeX/x86/sgx_total_bytes
to enumerate the amount of SGX memory available in each NUMA node.
This serves the same function for SGX as /proc/meminfo or
/sys/devices/system/node/nodeX/meminfo does for normal RAM.
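As a usage sketch (hypothetical consumer code, reading only node0 with
minimal error handling; the sysfs path is the one introduced above):

  #include <stdio.h>

  int main(void)
  {
          unsigned long long bytes;
          FILE *f = fopen("/sys/devices/system/node/node0/x86/sgx_total_bytes", "r");

          if (!f)
                  return 1;
          if (fscanf(f, "%llu", &bytes) == 1)
                  printf("node0 SGX memory: %llu bytes\n", bytes);
          fclose(f);
          return 0;
  }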
'sgx_total_bytes' is needed today to help drive the SGX selftests.
SGX-specific swap code is exercised by creating overcommitted enclaves
which are larger than the physical SGX memory on the system. They
currently use a CPUID-based approach which can diverge from the actual
amount of SGX memory available. 'sgx_total_bytes' ensures that the
selftests can work efficiently and do not attempt stupid things like
creating a 100,000 MB enclave on a system with 128 MB of SGX memory.
== Implementation Details ==
Introduce CONFIG_HAVE_ARCH_NODE_DEV_GROUP opt-in flag to expose an
arch specific attribute group, and add an attribute for the amount of
SGX memory in bytes to each NUMA node:
== ABI Design Discussion ==
As opposed to the per-node ABI, a single, global ABI was considered.
However, this would prevent enclaves from being able to size
themselves so that they fit on a single NUMA node. Essentially, a
single value would rule out NUMA optimizations for enclaves.
Create a new "x86/" directory inside each "nodeX/" sysfs directory.
'sgx_total_bytes' is expected to be the first of at least a few
sgx-specific files to be placed in the new directory. Just scanning
/proc/meminfo, these are the no-brainers that we have for RAM, but we
need for SGX:
MemTotal: xxxx kB // sgx_total_bytes (implemented here)
MemFree: yyyy kB // sgx_free_bytes
SwapTotal: zzzz kB // sgx_swapped_bytes
So, at *least* three. I think we will eventually end up needing
something more along the lines of a dozen. A new directory (as
opposed to placing the files in the nodeX/ "root" directory) avoids
cluttering the root with several "sgx_*" files.
Place the new file in a new "nodeX/x86/" directory because SGX is
highly x86-specific. It is very unlikely that any other architecture
(or even non-Intel x86 vendor) will ever implement SGX. Using "sgx/"
as opposed to "x86/" was also considered. But, there is a real chance
this can get used for other arch-specific purposes.
[ dhansen: rewrite changelog ]
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20211116162116.93081-2-jarkko@kernel.org
2021-11-17 00:21:16 +08:00
# Select, if arch has a named attribute group bound to NUMA device nodes.
config HAVE_ARCH_NODE_DEV_GROUP
	bool

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"

endmenu