License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to a
file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier                               # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and were fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_EFI_H
#define _ASM_X86_EFI_H

#include <asm/fpu/api.h>
#include <asm/processor-flags.h>
#include <asm/tlb.h>
#include <asm/nospec-branch.h>
#include <asm/mmu_context.h>
#include <linux/build_bug.h>
#include <linux/kernel.h>
#include <linux/pgtable.h>

extern unsigned long efi_fw_vendor, efi_config_table;

/*
 * We map the EFI regions needed for runtime services non-contiguously,
 * with preserved alignment on virtual addresses starting from -4G down
 * for a total max space of 64G. This way, we provide for stable runtime
 * services addresses across kernels so that a kexec'd kernel can still
 * use them.
 *
 * This is the main reason why we're doing stable VA mappings for RT
 * services.
 */
x86/efi: Firmware agnostic handover entry points
The EFI handover code only works if the "bitness" of the firmware and
the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not
possible to mix the two. This goes against the tradition that a 32-bit
kernel can be loaded on a 64-bit BIOS platform without having to do
anything special in the boot loader. Linux distributions, for one thing,
regularly run only 32-bit kernels on their live media.
Despite having only one 'handover_offset' field in the kernel header,
EFI boot loaders use two separate entry points to enter the kernel based
on the architecture the boot loader was compiled for,
(1) 32-bit loader: handover_offset
(2) 64-bit loader: handover_offset + 512
Since we already have two entry points, we can leverage them to infer
the bitness of the firmware we're running on, without requiring any boot
loader modifications, by making (1) and (2) valid entry points for both
CONFIG_X86_32 and CONFIG_X86_64 kernels.
To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot
loader will always use (2). It's just that, if a single kernel image
supports both (1) and (2), that image can be used with both 32-bit and
64-bit boot loaders, and hence both 32-bit and 64-bit EFI.
(1) and (2) must be 512 bytes apart at all times, but that is already
part of the boot ABI and we could never change that delta without
breaking existing boot loaders anyhow.
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
#define EFI32_LOADER_SIGNATURE	"EL32"
#define EFI64_LOADER_SIGNATURE	"EL64"

#define ARCH_EFI_IRQ_FLAGS_MASK	X86_EFLAGS_IF
/*
 * The EFI services are called through variadic functions in many cases. These
 * functions are implemented in assembler and support only a fixed number of
 * arguments. The macros below allow us to check at build time that we don't
 * try to call them with too many arguments.
 *
 * __efi_nargs() will return the number of arguments if it is 7 or less, and
 * cause a BUILD_BUG otherwise. The limitations of the C preprocessor make it
 * impossible to calculate the exact number of arguments beyond some
 * pre-defined limit. The maximum number of arguments currently supported by
 * any of the thunks is 7, so this is good enough for now and can be extended
 * in the obvious way if we ever need more.
 */

#define __efi_nargs(...) __efi_nargs_(__VA_ARGS__)
#define __efi_nargs_(...) __efi_nargs__(0, ##__VA_ARGS__,	\
	__efi_arg_sentinel(7), __efi_arg_sentinel(6),		\
	__efi_arg_sentinel(5), __efi_arg_sentinel(4),		\
	__efi_arg_sentinel(3), __efi_arg_sentinel(2),		\
	__efi_arg_sentinel(1), __efi_arg_sentinel(0))
#define __efi_nargs__(_0, _1, _2, _3, _4, _5, _6, _7, n, ...)	\
	__take_second_arg(n,					\
		({ BUILD_BUG_ON_MSG(1, "__efi_nargs limit exceeded"); 8; }))
#define __efi_arg_sentinel(n) , n

/*
 * __efi_nargs_check(f, n, ...) will cause a BUILD_BUG if the ellipsis
 * represents more than n arguments.
 */

#define __efi_nargs_check(f, n, ...)					\
	__efi_nargs_check_(f, __efi_nargs(__VA_ARGS__), n)
#define __efi_nargs_check_(f, p, n) __efi_nargs_check__(f, p, n)
#define __efi_nargs_check__(f, p, n) ({					\
	BUILD_BUG_ON_MSG(						\
		(p) > (n),						\
		#f " called with too many arguments (" #p ">" #n ")");	\
})
#ifdef CONFIG_X86_32

#define arch_efi_call_virt_setup()					\
({									\
	kernel_fpu_begin();						\
	firmware_restrict_branch_speculation_start();			\
})

#define arch_efi_call_virt_teardown()					\
({									\
	firmware_restrict_branch_speculation_end();			\
	kernel_fpu_end();						\
})

#define arch_efi_call_virt(p, f, args...)	p->f(args)

#else /* !CONFIG_X86_32 */

#define EFI_LOADER_SIGNATURE	"EL64"

extern asmlinkage u64 __efi_call(void *fp, ...);

#define efi_call(...) ({						\
	__efi_nargs_check(efi_call, 7, __VA_ARGS__);			\
	__efi_call(__VA_ARGS__);					\
})
/*
 * struct efi_scratch - Scratch space used while switching to/from efi_mm
 * @phys_stack:	stack used during EFI Mixed Mode
 * @prev_mm:	store/restore stolen mm_struct while switching to/from efi_mm
 */
struct efi_scratch {
	u64			phys_stack;
	struct mm_struct	*prev_mm;
} __packed;
#define arch_efi_call_virt_setup()					\
({									\
	efi_sync_low_kernel_mappings();					\
	kernel_fpu_begin();						\
	firmware_restrict_branch_speculation_start();			\
	efi_switch_mm(&efi_mm);						\
})

#define arch_efi_call_virt(p, f, args...)				\
	efi_call((void *)p->f, args)

#define arch_efi_call_virt_teardown()					\
({									\
	efi_switch_mm(efi_scratch.prev_mm);				\
	firmware_restrict_branch_speculation_end();			\
	kernel_fpu_end();						\
})
#ifdef CONFIG_KASAN
/*
 * CONFIG_KASAN may redefine memset to __memset. The __memset function is
 * present only in the kernel binary. Since the EFI stub is linked into a
 * separate binary, it doesn't have __memset(). So we should use the
 * standard memset from arch/x86/boot/compressed/string.c. The same applies
 * to memcpy and memmove.
 */
#undef memcpy
#undef memset
#undef memmove
#endif

#endif /* CONFIG_X86_32 */
extern struct efi_scratch efi_scratch;
extern int __init efi_memblock_x86_reserve_range(void);
extern void __init efi_print_memmap(void);
extern void __init efi_map_region(efi_memory_desc_t *md);
extern void __init efi_map_region_fixed(efi_memory_desc_t *md);
extern void efi_sync_low_kernel_mappings(void);
extern int __init efi_alloc_page_tables(void);
extern int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages);
extern void __init efi_runtime_update_mappings(void);
extern void __init efi_dump_pagetable(void);
extern void __init efi_apply_memmap_quirks(void);
extern int __init efi_reuse_config(u64 tables, int nr_tables);
extern void efi_delete_dummy_variable(void);
extern void efi_switch_mm(struct mm_struct *mm);
extern void efi_recover_from_page_fault(unsigned long phys_addr);
extern void efi_free_boot_services(void);
/* kexec external ABI */
struct efi_setup_data {
	u64 fw_vendor;
	u64 __unused;
	u64 tables;
	u64 smbios;
	u64 reserved[8];
};

extern u64 efi_setup;
#ifdef CONFIG_EFI

extern efi_status_t __efi64_thunk(u32, ...);

#define efi64_thunk(...) ({						\
	__efi_nargs_check(efi64_thunk, 6, __VA_ARGS__);			\
	__efi64_thunk(__VA_ARGS__);					\
})

static inline bool efi_is_mixed(void)
{
	if (!IS_ENABLED(CONFIG_EFI_MIXED))
		return false;
	return IS_ENABLED(CONFIG_X86_64) && !efi_enabled(EFI_64BIT);
}
efi/x86: Re-disable RT services for 32-bit kernels running on 64-bit EFI
Commit a8147dba75b1 ("efi/x86: Rename efi_is_native() to efi_is_mixed()")
renamed and refactored efi_is_native() into efi_is_mixed(), but failed
to take into account that these are not diametrical opposites.
Mixed mode is a construct that permits 64-bit kernels to boot on 32-bit
firmware, but there is another non-native combination which is supported,
i.e., 32-bit kernels booting on 64-bit firmware, but only for boot and not
for runtime services. Also, mixed mode can be disabled in Kconfig, in
which case the 64-bit kernel can still be booted from 32-bit firmware,
but without access to runtime services.
Due to this oversight, efi_runtime_supported() now incorrectly returns
true for such configurations, resulting in crashes at boot. So fix this
by making efi_runtime_supported() aware of these cases.
As a side effect, some efi_thunk_xxx() stubs have become obsolete, so
remove them as well.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Matthew Garrett <mjg59@google.com>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20200103113953.9571-4-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

efi/x86: Limit EFI old memory map to SGI UV machines
We carry a quirk in the x86 EFI code to switch back to an older
method of mapping the EFI runtime services memory regions, because
it was deemed risky at the time to implement a new method without
providing a fallback to the old method in case problems arose.
Such problems did arise, but they appear to be limited to SGI UV1
machines, and so these are the only ones for which the fallback gets
enabled automatically (via a DMI quirk). The fallback can be enabled
manually as well, by passing efi=old_map, but there is very little
evidence to suggest that this is being relied upon in the field.
Given that UV1 support is not enabled by default by the distros
(Ubuntu, Fedora), there is no point in carrying this fallback code
all the time if there are no other users. So let's move it into the
UV support code, and document that efi=old_map now requires this
support code to be enabled.
Note that efi=old_map has been used in the past on other SGI UV
machines to work around kernel regressions in production, so we
keep the option to enable it by hand, but only if the kernel was
built with UV support.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200113172245.27925-8-ardb@kernel.org

static inline bool efi_runtime_supported(void)
{
	if (IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT))
		return true;

	return IS_ENABLED(CONFIG_EFI_MIXED);
}
extern void parse_efi_setup(u64 phys_addr, u32 data_len);

extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);

extern void efi_thunk_runtime_setup(void);

efi_status_t efi_set_virtual_address_map(unsigned long memory_map_size,
					 unsigned long descriptor_size,
					 u32 descriptor_version,
					 efi_memory_desc_t *virtual_map,
					 unsigned long systab_phys);
/* arch specific definitions used by the stub code */

#ifdef CONFIG_EFI_MIXED

#define ARCH_HAS_EFISTUB_WRAPPERS

static inline bool efi_is_64bit(void)
{
	extern const bool efi_is64;

	return efi_is64;
}

static inline bool efi_is_native(void)
{
	if (!IS_ENABLED(CONFIG_X86_64))
		return true;
	return efi_is_64bit();
}
#define efi_mixed_mode_cast(attr)					\
	__builtin_choose_expr(						\
		__builtin_types_compatible_p(u32, __typeof__(attr)),	\
			(unsigned long)(attr), (attr))

#define efi_table_attr(inst, attr)					\
	(efi_is_native()						\
		? inst->attr						\
		: (__typeof__(inst->attr))				\
			efi_mixed_mode_cast(inst->mixed_mode.attr))
/*
 * The following macros allow translating arguments if necessary from native to
 * mixed mode. The use case for this is to initialize the upper 32 bits of
 * output parameters, and where the 32-bit method requires a 64-bit argument,
 * which must be split up into two arguments to be thunked properly.
 *
 * As examples, the AllocatePool boot service returns the address of the
 * allocation, but it will not set the high 32 bits of the address. To ensure
 * that the full 64-bit address is initialized, we zero-init the address before
 * calling the thunk.
 *
 * The FreePages boot service takes a 64-bit physical address even in 32-bit
 * mode. For the thunk to work correctly, a native 64-bit call of
 *	free_pages(addr, size)
 * must be translated to
 *	efi64_thunk(free_pages, addr & U32_MAX, addr >> 32, size)
 * so that the two 32-bit halves of addr get pushed onto the stack separately.
 */

static inline void *efi64_zero_upper(void *p)
{
	((u32 *)p)[1] = 0;
	return p;
}
efi/libstub/x86: Use Exit() boot service to exit the stub on errors
Currently, we either return with an error [from efi_pe_entry()] or
enter a deadloop [in efi_main()] if any fatal errors occur during
execution of the EFI stub. Let's switch to calling the Exit() EFI boot
service instead in both cases, so that we
a) can get rid of the deadloop, and simply return to the boot manager
if any errors occur during execution of the stub, including during
the call to ExitBootServices(),
b) can also return cleanly from efi_pe_entry() or efi_main() in mixed
mode, once we introduce support for LoadImage/StartImage based mixed
mode in the next patch.
Note that on systems running downstream GRUBs [which do not use LoadImage
or StartImage to boot the kernel, and instead, pass their own image
handle as the loaded image handle], calling Exit() will exit from GRUB
rather than from the kernel, but this is a tolerable side effect.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
static inline u32 efi64_convert_status(efi_status_t status)
{
	return (u32)(status | (u64)status >> 32);
}
#define __efi64_argmap_free_pages(addr, size)				\
	((addr), 0, (size))

#define __efi64_argmap_get_memory_map(mm_size, mm, key, size, ver)	\
	((mm_size), (mm), efi64_zero_upper(key), efi64_zero_upper(size), (ver))

#define __efi64_argmap_allocate_pool(type, size, buffer)		\
	((type), (size), efi64_zero_upper(buffer))

#define __efi64_argmap_create_event(type, tpl, f, c, event)		\
	((type), (tpl), (f), (c), efi64_zero_upper(event))

#define __efi64_argmap_set_timer(event, type, time)			\
	((event), (type), lower_32_bits(time), upper_32_bits(time))

#define __efi64_argmap_wait_for_event(num, event, index)		\
	((num), (event), efi64_zero_upper(index))

#define __efi64_argmap_handle_protocol(handle, protocol, interface)	\
	((handle), (protocol), efi64_zero_upper(interface))

#define __efi64_argmap_locate_protocol(protocol, reg, interface)	\
	((protocol), (reg), efi64_zero_upper(interface))

#define __efi64_argmap_locate_device_path(protocol, path, handle)	\
	((protocol), (path), efi64_zero_upper(handle))
#define __efi64_argmap_exit(handle, status, size, data)			\
	((handle), efi64_convert_status(status), (size), (data))
/* PCI I/O */
#define __efi64_argmap_get_location(protocol, seg, bus, dev, func)	\
	((protocol), efi64_zero_upper(seg), efi64_zero_upper(bus),	\
	 efi64_zero_upper(dev), efi64_zero_upper(func))

/* LoadFile */
#define __efi64_argmap_load_file(protocol, path, policy, bufsize, buf)	\
	((protocol), (path), (policy), efi64_zero_upper(bufsize), (buf))

/* Graphics Output Protocol */
#define __efi64_argmap_query_mode(gop, mode, size, info)		\
	((gop), (mode), efi64_zero_upper(size), efi64_zero_upper(info))
/*
 * The macros below handle the plumbing for the argument mapping. To add a
 * mapping for a specific EFI method, simply define a macro
 * __efi64_argmap_<method name>, following the examples above.
 */

#define __efi64_thunk_map(inst, func, ...)				\
	efi64_thunk(inst->mixed_mode.func,				\
		__efi64_argmap(__efi64_argmap_ ## func(__VA_ARGS__),	\
			       (__VA_ARGS__)))

#define __efi64_argmap(mapped, args)					\
	__PASTE(__efi64_argmap__, __efi_nargs(__efi_eat mapped))(mapped, args)
#define __efi64_argmap__0(mapped, args) __efi_eval mapped
#define __efi64_argmap__1(mapped, args) __efi_eval args

#define __efi_eat(...)
#define __efi_eval(...) __VA_ARGS__

/* The three macros below handle dispatching via the thunk if needed */
#define efi_call_proto(inst, func, ...)					\
	(efi_is_native()						\
		? inst->func(inst, ##__VA_ARGS__)			\
		: __efi64_thunk_map(inst, func, inst, ##__VA_ARGS__))
efi/libstub: Rename efi_call_early/_runtime macros to be more intuitive
The macros efi_call_early and efi_call_runtime are used to call EFI
boot services and runtime services, respectively. However, the naming
is confusing, given that the early vs runtime distinction may suggest
that these are used for calling the same set of services either early
or late (== at runtime), while in reality, the sets of services they
can be used with are completely disjoint, and efi_call_runtime is also
only usable in 'early' code.
So do a global sweep to replace all occurrences with efi_bs_call or
efi_rt_call, respectively, where BS and RT match the idiom used by
the UEFI spec to refer to boot time or runtime services.
While at it, use 'func' as the macro parameter name for the function
pointers, which is less likely to collide and cause weird build errors.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: James Morse <james.morse@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20191224151025.32482-24-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
#define efi_bs_call(func, ...)						\
	(efi_is_native()						\
		? efi_system_table->boottime->func(__VA_ARGS__)		\
		: __efi64_thunk_map(efi_table_attr(efi_system_table,	\
						   boottime),		\
				    func, __VA_ARGS__))
#define efi_rt_call(func, ...)						\
	(efi_is_native()						\
		? efi_system_table->runtime->func(__VA_ARGS__)		\
		: __efi64_thunk_map(efi_table_attr(efi_system_table,	\
						   runtime),		\
				    func, __VA_ARGS__))
#else /* CONFIG_EFI_MIXED */

static inline bool efi_is_64bit(void)
{
	return IS_ENABLED(CONFIG_X86_64);
}

#endif /* CONFIG_EFI_MIXED */
extern bool efi_reboot_required(void);
extern bool efi_is_table_address(unsigned long phys_addr);

extern void efi_find_mirror(void);
extern void efi_reserve_boot_services(void);
#else
static inline void parse_efi_setup(u64 phys_addr, u32 data_len) {}
static inline bool efi_reboot_required(void)
{
	return false;
}
static inline bool efi_is_table_address(unsigned long phys_addr)
{
	return false;
}
static inline void efi_find_mirror(void)
{
}
static inline void efi_reserve_boot_services(void)
{
}
#endif /* CONFIG_EFI */
#ifdef CONFIG_EFI_FAKE_MEMMAP
extern void __init efi_fake_memmap_early(void);
#else
static inline void efi_fake_memmap_early(void)
{
}
#endif

#endif /* _ASM_X86_EFI_H */