#!/bin/sh
License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no
licensing in it (per the prior point). Results summary:
SPDX license identifier                             # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                        270
GPL-2.0+ WITH Linux-syscall-note                       169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
LGPL-2.1+ WITH Linux-syscall-note                       15
GPL-1.0+ WITH Linux-syscall-note                        14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
LGPL-2.0+ WITH Linux-syscall-note                        4
LGPL-2.1 WITH Linux-syscall-note                         3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
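As a rough illustration of that last step (this is a minimal sketch only,
not the actual script Thomas wrote and Greg refined; the helper name and
the files.csv input are hypothetical), a POSIX sh fragment that prepends
the tag in the comment style each file type expects could look like:

    # add_spdx.sh (hypothetical): prepend an SPDX tag to every file listed
    # in a csv whose first column is the path, choosing the comment style
    # by file type. Files starting with a shebang would need the tag on
    # line 2, which is not handled here.
    while IFS=, read -r file rest; do
            case "${file}" in
            *.h) tag="/* SPDX-License-Identifier: GPL-2.0 */" ;;
            *.c) tag="// SPDX-License-Identifier: GPL-2.0" ;;
            *)   tag="# SPDX-License-Identifier: GPL-2.0" ;;
            esac
            # write the tag as a new first line, keeping the original contents
            { echo "${tag}"; cat "${file}"; } > "${file}.tmp" && mv "${file}.tmp" "${file}"
    done < files.csv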
# SPDX-License-Identifier: GPL-2.0
#
# link vmlinux
#
# vmlinux is linked from the objects selected by $(KBUILD_VMLINUX_OBJS) and
# $(KBUILD_VMLINUX_LIBS). Most are built-in.a files from top-level directories
# in the kernel tree, others are specified in arch/$(ARCH)/Makefile.
# $(KBUILD_VMLINUX_LIBS) are archives which are linked conditionally
# (not within --whole-archive), and do not require symbol indexes added.
#
# vmlinux
#   ^
#   |
#   +--< $(KBUILD_VMLINUX_OBJS)
#   |    +--< init/built-in.a drivers/built-in.a mm/built-in.a + more
#   |
#   +--< $(KBUILD_VMLINUX_LIBS)
#   |    +--< lib/lib.a + more
#   |
#   +-< ${kallsymso} (see description in KALLSYMS section)
#
# vmlinux version (uname -v) cannot be updated during normal
# descending-into-subdirs phase since we do not yet know if we need to
# update vmlinux.
# Therefore this step is delayed until just before final link of vmlinux.
#
# System.map is generated to document addresses of all kernel symbols

# Error out on error
set -e
kbuild: do not export LDFLAGS_vmlinux
When you clean the build tree for ARCH=arm, you may see the following
error message from 'nm' command:
$ make -j24 ARCH=arm clean
CLEAN arch/arm/crypto
CLEAN arch/arm/kernel
CLEAN arch/arm/mach-at91
CLEAN arch/arm/mach-omap2
CLEAN arch/arm/vdso
CLEAN certs
CLEAN lib
CLEAN usr
CLEAN net/wireless
CLEAN drivers/firmware/efi/libstub
nm: 'arch/arm/boot/compressed/../../../../vmlinux': No such file
/bin/sh: 1: arithmetic expression: expecting primary: " "
CLEAN arch/arm/boot/compressed
CLEAN drivers/scsi
CLEAN drivers/tty/vt
CLEAN arch/arm/boot
CLEAN vmlinux.symvers modules.builtin modules.builtin.modinfo
If you rerun the same command, the error message will not be
shown again, even though vmlinux is already gone.
To reproduce it, the parallel option -j is needed. Single thread
cleaning always executes 'archclean', 'vmlinuxclean' in this order,
so vmlinux still exists when arch/arm/boot/compressed/ is cleaned.
Looking at arch/arm/boot/compressed/Makefile does not help in understanding
the reason for the error message. Both KBSS_SZ and LDFLAGS_vmlinux are
assigned with the '=' operator, hence they are not expanded unless used.
Obviously, 'make clean' does not use them.
In fact, the root cause exists in the top Makefile:
export LDFLAGS_vmlinux
Since LDFLAGS_vmlinux is an exported variable, LDFLAGS_vmlinux in
arch/arm/boot/compressed/Makefile is expanded when scripts/Makefile.clean
has a command to execute. This is why the error message shows up only
when there exist build artifacts in arch/arm/boot/compressed/.
Adding 'unexport LDFLAGS_vmlinux' to arch/arm/boot/compressed/Makefile
will fix it as far as ARCH=arm is concerned, but I think the proper fix
is to get rid of 'export LDFLAGS_vmlinux' from the top Makefile.
LDFLAGS_vmlinux in the top Makefile contains linker flags for the top
vmlinux. LDFLAGS_vmlinux in arch/arm/boot/compressed/Makefile is for
arch/arm/boot/compressed/vmlinux. They just happen to have the same
variable name, but are used for different purposes. Stop shadowing
LDFLAGS_vmlinux.
This commit passes LDFLAGS_vmlinux to scripts/link-vmlinux.sh via a
command line parameter instead of via an environment variable. LD and
KBUILD_LDFLAGS are already exported, but I passed them the same way for
consistency. In any case, they must be included in cmd_link-vmlinux to
allow if_changed to detect changes in LD or KBUILD_LDFLAGS.
The following Makefiles are not affected:
arch/arm/boot/compressed/Makefile
arch/h8300/boot/compressed/Makefile
arch/nios2/boot/compressed/Makefile
arch/parisc/boot/compressed/Makefile
arch/s390/boot/compressed/Makefile
arch/sh/boot/compressed/Makefile
arch/sh/boot/romimage/Makefile
arch/x86/boot/compressed/Makefile
They use ':=' or '=' to clear the LDFLAGS_vmlinux inherited from the
top Makefile.
We need to take a closer look at the impact on unicore32 and xtensa.
arch/unicore32/boot/compressed/Makefile only uses '+=' operator for
LDFLAGS_vmlinux. So, the decompressor previously inherited the linker
flags from the top Makefile.
However, commit 70fac51feaf2 ("unicore32 additional architecture files:
boot process") was merged before commit 1f2bfbd00e46 ("kbuild: link of
vmlinux moved to a script"). So, I rather consider this is a bug fix of
1f2bfbd00e46.
arch/xtensa/boot/boot-elf/Makefile is also affected, but this is also
considered a fix for the same reason. It did not inherit LDFLAGS_vmlinux
when commit 4bedea945451 ("[PATCH] xtensa: Architecture support for
Tensilica Xtensa Part 2") was merged. I deleted $(LDFLAGS_vmlinux),
which is now empty.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
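For reference, the script below now consumes these values as positional
parameters, so the caller ends up invoking it roughly like this (a sketch
only; the exact Makefile rule is not reproduced here):

    # hypothetical invocation matching the $1/$2/$3 parameters read below
    $ sh scripts/link-vmlinux.sh "${LD}" "${KBUILD_LDFLAGS}" "${LDFLAGS_vmlinux}"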
LD="$1"
KBUILD_LDFLAGS="$2"
LDFLAGS_vmlinux="$3"

# Nice output in kbuild format
# Will be suppressed by "make -s"
info()
{
        if [ "${quiet}" != "silent_" ]; then
kbuild: add ability to generate BTF type info for vmlinux
This patch adds a new config option to trigger generation of BTF type
information from DWARF debuginfo for vmlinux and kernel modules through
pahole, which in turn relies on libbpf for the btf_dedup() algorithm.
The intent is to record compact type information of all types used
inside the kernel, including all the structs/unions/typedefs/etc. This
enables BPF's compile-once-run-everywhere ([0]) approach, in which
tracing programs that inspect the kernel's internal data (e.g.,
struct task_struct) can be compiled on a system running some kernel
version, but can then be run on other kernel versions (and
configurations) without recompilation, even if the layout of structs
changed and/or some of the fields were added, removed, or renamed.
This is only possible if the BPF loader can get kernel type info to adjust
all the offsets correctly. This patch is a first step in this direction,
making sure that BTF type info is part of the Linux kernel image in a
non-loadable ELF section.
The BTF deduplication algorithm ([1]) typically provides 100x savings
compared to DWARF data, so the resulting .BTF section is not big; it is
typically about 2MB in size.
[0] http://vger.kernel.org/lpc-bpf2018.html#session-2
[1] https://facebookmicrosites.github.io/bpf/blog/2018/11/14/btf-enhancement.html
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
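At its core, the conversion this option triggers is a single pahole pass
over the linked image; a minimal sketch (the temporary file name is
illustrative, and the exact flags this script passes have varied between
kernel versions):

    # encode BTF into the ELF file by converting its DWARF debuginfo;
    # pahole -J adds a .BTF section to the given file in place
    ${PAHOLE} -J .tmp_vmlinux.btf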
                printf " %-7s %s\n" "${1}" "${2}"
        fi
}

# Generate a linker script to ensure correct ordering of initcalls.
gen_initcalls()
{
        info GEN .tmp_initcalls.lds

        ${PYTHON} ${srctree}/scripts/jobserver-exec             \
        ${PERL} ${srctree}/scripts/generate_initcall_order.pl   \
                ${KBUILD_VMLINUX_OBJS} ${KBUILD_VMLINUX_LIBS}   \
                > .tmp_initcalls.lds
}

# If CONFIG_LTO_CLANG is selected, collect generated symbol versions into
# .tmp_symversions.lds
gen_symversions()
{
        info GEN .tmp_symversions.lds
        rm -f .tmp_symversions.lds

        for o in ${KBUILD_VMLINUX_OBJS} ${KBUILD_VMLINUX_LIBS}; do
                if [ -f ${o}.symversions ]; then
                        cat ${o}.symversions >> .tmp_symversions.lds
                fi
        done
}

# Link of vmlinux.o used for section mismatch analysis
# ${1} output file
modpost_link()
{
kbuild: allow architectures to use thin archives instead of ld -r
ld -r is an incremental link used to create built-in.o files in build
subdirectories. It produces relocatable object files containing all
its input files, and these are then pulled together and relocated
in the final link. Aside from the bloat, this constrains the final
link relocations, which has bitten large powerpc builds with
unresolvable relocations in the final link.
Alan Modra has recommended the kernel use thin archives for linking.
This is an alternative and means that the linker has more information
available to it when it links the kernel.
This patch enables a config option architectures can select, which
causes all built-in.o files to be built as thin archives. built-in.o
files in subdirectories do not get symbol table or index attached,
which improves speed and size. The final link pass creates a
built-in.o archive in the root output directory which includes the
symbol table and index. The linker then uses this file to link.
The --whole-archive linker option is required, because the linker now
has visibility to every individual object file, and it will otherwise
just completely avoid including those without external references
(consider a file with EXPORT_SYMBOL or initcall or hardware exceptions
as its only entry points). The traditional build works "by luck" as
built-in.o files are large enough that they're going to get external
references. However this optimisation is unpredictable for the kernel
(due to the above external references), ineffective at culling unused code, and
costly because the .o files have to be searched for references.
Superior alternatives for link-time culling should be used instead.
Build characteristics for inclink vs thinarc, on a small powerpc64le
pseries VM with a modest .config:
                                    inclink       thinarc
sizes
vmlinux                          15 618 680    15 625 028
sum of all built-in.o            56 091 808     1 054 334
sum excluding root built-in.o                     151 430

find -name built-in.o | xargs rm ; time make vmlinux
real                                22.772s       21.143s
user                                13.280s       13.430s
sys                                  4.310s        2.750s
- Final kernel pulled in only about 6K more, which shows how
ineffective the object file culling is.
- Build performance looks improved due to less pagecache activity.
On IO constrained systems it could be a bigger win.
- Build size saving is significant.
Side note: the toolchain understands archives, so there are some tricks,
$ ar t built-in.o # list all files you linked with
$ size built-in.o # and their sizes
$ objdump -d built-in.o # disassembly (unrelocated) with filenames
Implementation by sfr, minor tweaks by npiggin.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.com>
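For readers unfamiliar with thin archives, the per-directory step this
option enables amounts to something like the following; the paths are
illustrative and the exact ar flags kbuild passes may differ ('T' asks
GNU ar for a thin archive, 'S' skips and 's' builds the symbol index):

    # subdirectory archive: thin, no symbol index
    $ ar rcST init/built-in.o init/main.o init/version.o
    # top-level archive: thin, with a symbol index, ready for the final link
    $ ar rcsT built-in.o init/built-in.o mm/built-in.o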
        local objects
        local lds=""

        objects="--whole-archive                        \
                ${KBUILD_VMLINUX_OBJS}                  \
                --no-whole-archive                      \
                --start-group                           \
                ${KBUILD_VMLINUX_LIBS}                  \
                --end-group"

kbuild: add support for Clang LTO
This change adds build system support for Clang's Link Time
Optimization (LTO). With -flto, instead of ELF object files, Clang
produces LLVM bitcode, which is compiled into native code at link
time, allowing the final binary to be optimized globally. For more
details, see:
https://llvm.org/docs/LinkTimeOptimization.html
The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
which defaults to LTO being disabled. To use LTO, the architecture
must select ARCH_SUPPORTS_LTO_CLANG and support:
- compiling with Clang,
- compiling all assembly code with Clang's integrated assembler,
- and linking with LLD.
While using CONFIG_LTO_CLANG_FULL results in the best runtime
performance, the compilation is not scalable in time or
memory. CONFIG_LTO_CLANG_THIN enables ThinLTO, which allows
parallel optimization and faster incremental builds. ThinLTO is
used by default if the architecture also selects
ARCH_SUPPORTS_LTO_CLANG_THIN:
https://clang.llvm.org/docs/ThinLTO.html
To enable LTO, LLVM tools must be used to handle bitcode files, by
passing LLVM=1 and LLVM_IAS=1 options to make:
$ make LLVM=1 LLVM_IAS=1 defconfig
$ scripts/config -e LTO_CLANG_THIN
$ make LLVM=1 LLVM_IAS=1
To prepare for LTO support with other compilers, common parts are
gated behind the CONFIG_LTO option, and LTO can be disabled for
specific files by filtering out CC_FLAGS_LTO.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20201211184633.3213045-3-samitolvanen@google.com
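The per-file opt-out mentioned above uses the usual kbuild mechanism; a
hedged sketch of what that looks like in a subdirectory Makefile (the
object name is only an example, not a file this commit touches):

    # disable LTO for one object by filtering the LTO flags back out
    CFLAGS_REMOVE_example.o += $(CC_FLAGS_LTO)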
        if [ -n "${CONFIG_LTO_CLANG}" ]; then
                gen_initcalls
                lds="-T .tmp_initcalls.lds"

                if [ -n "${CONFIG_MODVERSIONS}" ]; then
                        gen_symversions
                        lds="${lds} -T .tmp_symversions.lds"
                fi

                # This might take a while, so indicate that we're doing
                # an LTO link
                info LTO ${1}
        else
                info LD ${1}
        fi

        ${LD} ${KBUILD_LDFLAGS} -r -o ${1} ${lds} ${objects}
}

objtool_link()
{
        local objtoolcmd;
        local objtoolopt;

        if [ "${CONFIG_LTO_CLANG} ${CONFIG_STACK_VALIDATION}" = "y y" ]; then
                # Don't perform vmlinux validation unless explicitly requested,
                # but run objtool on vmlinux.o now that we have an object file.
                if [ -n "${CONFIG_UNWINDER_ORC}" ]; then
                        objtoolcmd="orc generate"
                fi

                objtoolopt="${objtoolopt} --duplicate"

                if [ -n "${CONFIG_FTRACE_MCOUNT_USE_OBJTOOL}" ]; then
                        objtoolopt="${objtoolopt} --mcount"
                fi
        fi

        if [ -n "${CONFIG_VMLINUX_VALIDATION}" ]; then
                objtoolopt="${objtoolopt} --noinstr"
        fi

        if [ -n "${objtoolopt}" ]; then
                if [ -z "${objtoolcmd}" ]; then
                        objtoolcmd="check"
                fi
                objtoolopt="${objtoolopt} --vmlinux"
                if [ -z "${CONFIG_FRAME_POINTER}" ]; then
                        objtoolopt="${objtoolopt} --no-fp"
                fi
                if [ -n "${CONFIG_GCOV_KERNEL}" ] || [ -n "${CONFIG_LTO_CLANG}" ]; then
                        objtoolopt="${objtoolopt} --no-unreachable"
                fi
                if [ -n "${CONFIG_RETPOLINE}" ]; then
                        objtoolopt="${objtoolopt} --retpoline"
                fi
                if [ -n "${CONFIG_X86_SMAP}" ]; then
                        objtoolopt="${objtoolopt} --uaccess"
                fi
                info OBJTOOL ${1}
                tools/objtool/objtool ${objtoolcmd} ${objtoolopt} ${1}
        fi
}

# Link of vmlinux
btf: expose BTF info through sysfs
Make .BTF section allocated and expose its contents through sysfs.
A /sys/kernel/btf directory is created to contain all the BTFs present
inside the kernel. Currently there is only the kernel's main BTF, represented as
/sys/kernel/btf/kernel file. Once kernel modules' BTFs are supported,
each module will expose its BTF as /sys/kernel/btf/<module-name> file.
Current approach relies on a few pieces coming together:
1. pahole is used to take the almost-final vmlinux image (modulo .BTF and
kallsyms) and generate a .BTF section by converting DWARF info into
BTF. This section is not allocated and not mapped to any segment,
though, so is not yet accessible from inside kernel at runtime.
2. objcopy dumps the .BTF contents into a binary file and subsequently
converts that binary file into a linkable object file with automatically
generated symbols _binary__btf_kernel_bin_start and
_binary__btf_kernel_bin_end, pointing to start and end, respectively,
of BTF raw data.
3. the final vmlinux image is generated by linking this object file (and
kallsyms, if necessary). sysfs_btf.c then creates the
/sys/kernel/btf/kernel file and exposes the embedded BTF contents through
it. This allows, e.g., libbpf and bpftool to access BTF info at a
well-known location, without resorting to searching for the vmlinux image
on disk (the location of which is not standardized, and the vmlinux image
might not even be available in some scenarios, e.g., inside qemu
during testing).
An alternative approach using the .incbin assembler directive to embed BTF
contents directly was attempted but didn't work, because sysfs_proc.o is
not re-compiled during the link-vmlinux.sh stage. This is required, though,
to update embedded BTF data (initially empty data is embedded, then
pahole generates BTF info and we need to regenerate sysfs_btf.o with
updated contents, but it's too late at that point).
If BTF couldn't be generated due to missing or too old pahole,
sysfs_btf.c handles that gracefully by detecting that
_binary__btf_kernel_bin_start (weak symbol) is 0 and not creating
/sys/kernel/btf at all.
v2->v3:
- added Documentation/ABI/testing/sysfs-kernel-btf (Greg K-H);
- created proper kobject (btf_kobj) for btf directory (Greg K-H);
- undo v2 change of reusing vmlinux, as it causes extra kallsyms pass
due to initially missing __binary__btf_kernel_bin_{start/end} symbols;
v1->v2:
- allow kallsyms stage to re-use vmlinux generated by gen_btf();
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
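The objcopy steps in point 2 above, taken in isolation, look roughly like
this; the file names are illustrative, the output format/architecture
would really come from the arch Makefile, and the exact options used by
link-vmlinux.sh have changed over time:

    # dump the raw .BTF section out of the almost-final image ...
    ${OBJCOPY} --dump-section .BTF=.btf.vmlinux.bin .tmp_vmlinux.btf
    # ... and wrap the raw data back into a linkable object; objcopy derives
    # the _binary__btf_vmlinux_bin_start/_end symbols from the input name
    ${OBJCOPY} -I binary -O elf64-x86-64 -B i386:x86-64 \
            .btf.vmlinux.bin .btf.vmlinux.bin.o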
# ${1} - output file
# ${2}, ${3}, ... - optional extra .o files
vmlinux_link()
{
        local lds="${objtree}/${KBUILD_LDS}"
        local output=${1}
        local objects
        local strip_debug

        info LD ${output}

        # skip output file argument
        shift

        # The kallsyms linking does not need debug symbols included.
        if [ "$output" != "${output#.tmp_vmlinux.kallsyms}" ] ; then
                strip_debug=-Wl,--strip-debug
        fi

        if [ "${SRCARCH}" != "um" ]; then
                if [ -n "${CONFIG_LTO_CLANG}" ]; then
                        # Use vmlinux.o instead of performing the slow LTO
                        # link again.
                        objects="--whole-archive                \
                                vmlinux.o                       \
                                --no-whole-archive              \
                                ${@}"
                else
                        objects="--whole-archive                \
                                ${KBUILD_VMLINUX_OBJS}          \
                                --no-whole-archive              \
                                --start-group                   \
                                ${KBUILD_VMLINUX_LIBS}          \
                                --end-group                     \
                                ${@}"
                fi

                ${LD} ${KBUILD_LDFLAGS} ${LDFLAGS_vmlinux}      \
                        ${strip_debug#-Wl,}                     \
                        -o ${output}                            \
                        -T ${lds} ${objects}
        else
                objects="-Wl,--whole-archive                    \
                        ${KBUILD_VMLINUX_OBJS}                  \
                        -Wl,--no-whole-archive                  \
                        -Wl,--start-group                       \
                        ${KBUILD_VMLINUX_LIBS}                  \
                        -Wl,--end-group                         \
                        ${@}"

                ${CC} ${CFLAGS_vmlinux}                         \
                        ${strip_debug}                          \
                        -o ${output}                            \
                        -Wl,-T,${lds}                           \
                        ${objects}                              \
                        -lutil -lrt -lpthread
                rm -f linux
        fi
}

# generate .BTF typeinfo from DWARF debuginfo
# ${1} - vmlinux image
# ${2} - file to dump raw BTF data into
gen_btf()
|
|
|
|
{
|
btf: expose BTF info through sysfs
Make .BTF section allocated and expose its contents through sysfs.
/sys/kernel/btf directory is created to contain all the BTFs present
inside kernel. Currently there is only kernel's main BTF, represented as
/sys/kernel/btf/kernel file. Once kernel modules' BTFs are supported,
each module will expose its BTF as /sys/kernel/btf/<module-name> file.
Current approach relies on a few pieces coming together:
1. pahole is used to take almost final vmlinux image (modulo .BTF and
kallsyms) and generate .BTF section by converting DWARF info into
BTF. This section is not allocated and not mapped to any segment,
though, so is not yet accessible from inside kernel at runtime.
2. objcopy dumps .BTF contents into binary file and subsequently
convert binary file into linkable object file with automatically
generated symbols _binary__btf_kernel_bin_start and
_binary__btf_kernel_bin_end, pointing to start and end, respectively,
of BTF raw data.
3. final vmlinux image is generated by linking this object file (and
kallsyms, if necessary). sysfs_btf.c then creates
/sys/kernel/btf/kernel file and exposes embedded BTF contents through
it. This allows, e.g., libbpf and bpftool access BTF info at
well-known location, without resorting to searching for vmlinux image
on disk (location of which is not standardized and vmlinux image
might not be even available in some scenarios, e.g., inside qemu
during testing).
Alternative approach using .incbin assembler directive to embed BTF
contents directly was attempted but didn't work, because sysfs_proc.o is
not re-compiled during link-vmlinux.sh stage. This is required, though,
to update embedded BTF data (initially empty data is embedded, then
pahole generates BTF info and we need to regenerate sysfs_btf.o with
updated contents, but it's too late at that point).
If BTF couldn't be generated due to missing or too old pahole,
sysfs_btf.c handles that gracefully by detecting that
_binary__btf_kernel_bin_start (weak symbol) is 0 and not creating
/sys/kernel/btf at all.
v2->v3:
- added Documentation/ABI/testing/sysfs-kernel-btf (Greg K-H);
- created proper kobject (btf_kobj) for btf directory (Greg K-H);
- undo v2 change of reusing vmlinux, as it causes extra kallsyms pass
due to initially missing __binary__btf_kernel_bin_{start/end} symbols;
v1->v2:
- allow kallsyms stage to re-use vmlinux generated by gen_btf();
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-13 02:39:47 +08:00
|
|
|
local pahole_ver
|

kbuild: add ability to generate BTF type info for vmlinux

This patch adds a new config option to trigger generation of BTF type
information from DWARF debuginfo for vmlinux and kernel modules through
pahole, which in turn relies on libbpf for its btf_dedup() algorithm.

The intent is to record compact type information for all types used
inside the kernel, including all the structs/unions/typedefs/etc. This
enables BPF's compile-once-run-everywhere ([0]) approach, in which
tracing programs that inspect the kernel's internal data (e.g.,
struct task_struct) can be compiled on a system running some kernel
version, but can run on other kernel versions (and configurations)
without recompilation, even if the layout of structs changed and/or
some of the fields were added, removed, or renamed. This is only
possible if the BPF loader can get kernel type info to adjust all the
offsets correctly. This patch is a first step in this direction, making
sure that BTF type info is part of the Linux kernel image in a
non-loadable ELF section.

The BTF deduplication ([1]) algorithm typically provides 100x savings
compared to DWARF data, so the resulting .BTF section is not big and is
typically about 2MB in size.

[0] http://vger.kernel.org/lpc-bpf2018.html#session-2
[1] https://facebookmicrosites.github.io/bpf/blog/2018/11/14/btf-enhancement.html

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
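
A quick spot-check sketch for what this option produces, assuming a
finished build tree with CONFIG_DEBUG_INFO_BTF=y and readelf/bpftool on
the host:

	# the non-loadable .BTF section should now be present in vmlinux
	readelf -S vmlinux | grep -F '.BTF'
	# and the deduplicated types can be pretty-printed from it
	bpftool btf dump file vmlinux format c | head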

	if ! [ -x "$(command -v ${PAHOLE})" ]; then
		echo >&2 "BTF: ${1}: pahole (${PAHOLE}) is not available"
		return 1
	fi

	pahole_ver=$(${PAHOLE} --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/')
	if [ "${pahole_ver}" -lt "116" ]; then
		echo >&2 "BTF: ${1}: pahole version $(${PAHOLE} --version) is too old, need at least v1.16"
		return 1
	fi

	vmlinux_link ${1}

	info "BTF" ${2}
	LLVM_OBJCOPY=${OBJCOPY} ${PAHOLE} -J ${1}

	# Create ${2} which contains just .BTF section but no symbols. Add
	# SHF_ALLOC because .BTF will be part of the vmlinux image. --strip-all
	# deletes all symbols including __start_BTF and __stop_BTF, which will
	# be redefined in the linker script. Add 2>/dev/null to suppress GNU
	# objcopy warnings: "empty loadable segment detected at ..."
	${OBJCOPY} --only-section=.BTF --set-section-flags .BTF=alloc,readonly \
		--strip-all ${1} ${2} 2>/dev/null

	# Change e_type to ET_REL so that it can be used to link final vmlinux.
	# Unlike GNU ld, lld does not allow an ET_EXEC input.
	printf '\1' | dd of=${2} conv=notrunc bs=1 seek=16 status=none
}
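
The dd invocation above overwrites the two-byte e_type field at offset
16 of the ELF header, turning ET_EXEC (2) into ET_REL (1) so the object
can be fed back to the linker. A minimal way to confirm the effect,
assuming the intermediate .btf.vmlinux.bin.o is still in the build tree:

	# expect "REL (Relocatable file)" rather than "EXEC (Executable file)"
	readelf -h .btf.vmlinux.bin.o | grep 'Type:'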

# Create ${2} .S file with all symbols from the ${1} object file
kallsyms()
{
	local kallsymopt;

	if [ -n "${CONFIG_KALLSYMS_ALL}" ]; then
		kallsymopt="${kallsymopt} --all-symbols"
	fi

	if [ -n "${CONFIG_KALLSYMS_ABSOLUTE_PERCPU}" ]; then
		kallsymopt="${kallsymopt} --absolute-percpu"
	fi

kallsyms: add support for relative offsets in kallsyms address table

Similar to how relative extables are implemented, it is possible to emit
the kallsyms table in such a way that it contains offsets relative to
some anchor point in the kernel image rather than absolute addresses.

On 64-bit architectures, it cuts the size of the kallsyms address table
in half, since offsets between kernel symbols can typically be expressed
in 32 bits. This saves several hundred kilobytes of permanent .rodata on
average. In addition, the kallsyms address table is no longer subject to
dynamic relocation when CONFIG_RELOCATABLE is in effect, so the
relocation work done after decompression now doesn't have to do
relocation updates for all these values. This saves up to 24 bytes
(i.e., the size of an ELF64 RELA relocation table entry) per value,
which easily adds up to a couple of megabytes of uncompressed __init
data on ppc64 or arm64. Even if these relocation entries typically
compress well, the combined size reduction of 2.8 MB uncompressed for a
ppc64_defconfig build (of which 2.4 MB is __init data) results in a ~500
KB space saving in the compressed image.

Since it is useful for some architectures (like x86) to retain the
ability to emit absolute values as well, this patch also adds support
for capturing both absolute and relative values when
KALLSYMS_ABSOLUTE_PERCPU is in effect, by emitting absolute per-cpu
addresses as positive 32-bit values, and addresses relative to the
lowest encountered relative symbol as negative values, which are
subtracted from the runtime address of this base symbol to produce the
actual address.

Support for the above is enabled by default for all architectures except
IA-64 and Tile-GX, whose symbols are too far apart to capture in this
manner.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
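
A rough way to see the relative scheme on a finished build, assuming
CONFIG_KALLSYMS_BASE_RELATIVE=y (the symbol names below are the ones
scripts/kallsyms emits for this layout):

	# the table becomes 32-bit offsets plus a single anchor symbol
	grep -E ' kallsyms_(offsets|relative_base)$' System.map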

	if [ -n "${CONFIG_KALLSYMS_BASE_RELATIVE}" ]; then
		kallsymopt="${kallsymopt} --base-relative"
	fi

	info KSYMS ${2}
	${NM} -n ${1} | scripts/kallsyms ${kallsymopt} > ${2}
}

# Perform one step in kallsyms generation, including temporary linking of
# vmlinux.
kallsyms_step()
{
	kallsymso_prev=${kallsymso}
	kallsyms_vmlinux=.tmp_vmlinux.kallsyms${1}
	kallsymso=${kallsyms_vmlinux}.o
	kallsyms_S=${kallsyms_vmlinux}.S

	vmlinux_link ${kallsyms_vmlinux} "${kallsymso_prev}" ${btf_vmlinux_bin_o}
	kallsyms ${kallsyms_vmlinux} ${kallsyms_S}

	info AS ${kallsyms_S}
	${CC} ${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS} \
	      ${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} \
	      -c -o ${kallsymso} ${kallsyms_S}
}

# Create map file with all symbols from ${1}
# See mksysmap for additional details
mksysmap()
{
	${CONFIG_SHELL} "${srctree}/scripts/mksysmap" ${1} ${2}
}

sorttable()
{
	${objtree}/scripts/sorttable ${1}
}

# Delete output files in case of error
cleanup()
{
	rm -f .btf.*
	rm -f .tmp_System.map
	rm -f .tmp_initcalls.lds
	rm -f .tmp_symversions.lds
	rm -f .tmp_vmlinux*
	rm -f System.map
	rm -f vmlinux
	rm -f vmlinux.o
}

on_exit()
{
	if [ $? -ne 0 ]; then
		cleanup
	fi
}
trap on_exit EXIT

on_signals()
{
	exit 1
}
trap on_signals HUP INT QUIT TERM

# Use "make V=1" to debug this script
case "${KBUILD_VERBOSE}" in
*1*)
	set -x
	;;
esac
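
For example (a sketch), to re-run the final link with this script traced
via set -x:

	make V=1 vmlinux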

if [ "$1" = "clean" ]; then
	cleanup
	exit 0
fi

# We need access to CONFIG_ symbols
. include/config/auto.conf

# Update version
info GEN .version
if [ -r .version ]; then
	VERSION=$(expr 0$(cat .version) + 1)
	echo $VERSION > .version
else
	rm -f .version
	echo 1 > .version
fi;
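
A small aside on the expr idiom above: the leading 0 keeps the
arithmetic valid even when .version is empty, e.g. (sketch):

	echo 41 > .version
	expr 0$(cat .version) + 1	# prints 42
	: > .version
	expr 0$(cat .version) + 1	# still valid, prints 1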

# final build of init/
${MAKE} -f "${srctree}/scripts/Makefile.build" obj=init need-builtin=1

# link vmlinux.o
modpost_link vmlinux.o
objtool_link vmlinux.o

# modpost vmlinux.o to check for section mismatches
${MAKE} -f "${srctree}/scripts/Makefile.modpost" MODPOST_VMLINUX=1

info MODINFO modules.builtin.modinfo
${OBJCOPY} -j .modinfo -O binary vmlinux.o modules.builtin.modinfo

kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf

Commit bc081dd6e9f6 ("kbuild: generate modules.builtin") added
infrastructure to generate modules.builtin, the list of all builtin
modules. Basically, it works like this:
- Kconfig generates include/config/tristate.conf, the list of tristate
  CONFIG options with a value in a capital letter.
- scripts/Makefile.modbuiltin makes Kbuild descend into directories to
  collect the information of builtin modules.

I am not a big fan of it because Kbuild ends up traversing the source
tree twice. I am not sure how perfectly it should work, but this
approach cannot avoid false positives; even if the relevant CONFIG
option is tristate, some Makefiles force obj-m to obj-y. Some examples
are:

arch/powerpc/platforms/powermac/Makefile:
obj-$(CONFIG_NVRAM:m=y) += nvram.o

net/ipv6/Makefile:
obj-$(subst m,y,$(CONFIG_IPV6)) += inet6_hashtables.o

net/netlabel/Makefile:
obj-$(subst m,y,$(CONFIG_IPV6)) += netlabel_calipso.o

Nobody has complained about (or noticed) it, so it is probably fine to
have false positives in modules.builtin.

This commit simplifies the implementation. Let's exploit the fact that
every module has MODULE_LICENSE(). (modpost shows a warning if
MODULE_LICENSE is missing. If so, the 0-day bot would already have
blocked such a module.)

I added MODULE_FILE to <linux/module.h>. When the code is being compiled
as builtin, it will be filled with the file path of the module and
collected into modules.builtin.modinfo. Then, scripts/link-vmlinux.sh
extracts the list of builtin modules out of it.

This new approach fixes the false positives above, but adds another type
of false positive; non-modular code may have MODULE_LICENSE() by
mistake. This is not a big deal; it just means the code is always
orphan. We can clean it up if we like. You can see cleanup examples by:

$ git log --grep='make.* explicitly non-modular'

To sum up, this commit deletes lots of code but still produces almost
equivalent results. Please note it does not increase the vmlinux size at
all. As you can see in include/asm-generic/vmlinux.lds.h, the .modinfo
section is discarded in the link stage.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

info GEN modules.builtin
# The second line aids cases where multiple modules share the same object.
tr '\0' '\n' < modules.builtin.modinfo | sed -n 's/^[[:alnum:]:_]*\.file=//p' |
	tr ' ' '\n' | uniq | sed -e 's:^:kernel/:' -e 's/$/.ko/' > modules.builtin
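
To make the pipeline above concrete, a stand-alone sketch with a single
hypothetical .modinfo entry (illustration only):

	printf 'ext4.file=fs/ext4/ext4\0' |
		tr '\0' '\n' | sed -n 's/^[[:alnum:]:_]*\.file=//p' |
		tr ' ' '\n' | uniq | sed -e 's:^:kernel/:' -e 's/$/.ko/'
	# prints: kernel/fs/ext4/ext4.ko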

btf_vmlinux_bin_o=""
if [ -n "${CONFIG_DEBUG_INFO_BTF}" ]; then
	btf_vmlinux_bin_o=.btf.vmlinux.bin.o
	if ! gen_btf .tmp_vmlinux.btf $btf_vmlinux_bin_o ; then
		echo >&2 "Failed to generate BTF for vmlinux"
		echo >&2 "Try to disable CONFIG_DEBUG_INFO_BTF"
		exit 1
	fi
fi
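
If this branch fails on a given host, the usual first check is the
pahole gate in gen_btf() above (sketch):

	pahole --version	# gen_btf() requires at least v1.16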

kallsymso=""
kallsymso_prev=""
kallsyms_vmlinux=""
if [ -n "${CONFIG_KALLSYMS}" ]; then

	# kallsyms support
	# Generate section listing all symbols and add it into vmlinux
	# It's a four step process:
	# 1)  Link .tmp_vmlinux1 so it has all symbols and sections,
	#     but __kallsyms is empty.
	#     Running kallsyms on that gives us .tmp_kallsyms1.o with
	#     the right size
	# 2)  Link .tmp_vmlinux2 so it now has a __kallsyms section of
	#     the right size, but due to the added section, some
	#     addresses have shifted.
	#     From here, we generate a correct .tmp_kallsyms2.o
	# 3)  That link may have expanded the kernel image enough that
	#     more linker branch stubs / trampolines had to be added, which
	#     introduces new names, which further expands kallsyms. Do another
	#     pass if that is the case. In theory it's possible this results
	#     in even more stubs, but unlikely.
	#     KALLSYMS_EXTRA_PASS=1 may also be used to debug or work around
	#     other bugs.
	# 4)  The correct ${kallsymso} is linked into the final vmlinux.
	#
	# a)  Verify that the System.map from vmlinux matches the map from
	#     ${kallsymso}.

	kallsyms_step 1
	kallsyms_step 2

	# step 3
	size1=$(${CONFIG_SHELL} "${srctree}/scripts/file-size.sh" ${kallsymso_prev})
	size2=$(${CONFIG_SHELL} "${srctree}/scripts/file-size.sh" ${kallsymso})

	if [ $size1 -ne $size2 ] || [ -n "${KALLSYMS_EXTRA_PASS}" ]; then
		kallsyms_step 3
	fi
fi
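
If the sizes keep disagreeing, the extra pass mentioned in the comment
block above can also be forced from the command line (sketch):

	make KALLSYMS_EXTRA_PASS=1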

vmlinux_link vmlinux "${kallsymso}" ${btf_vmlinux_bin_o}

# fill in BTF IDs
if [ -n "${CONFIG_DEBUG_INFO_BTF}" -a -n "${CONFIG_BPF}" ]; then
	info BTFIDS vmlinux
	${RESOLVE_BTFIDS} vmlinux
fi

if [ -n "${CONFIG_BUILDTIME_TABLE_SORT}" ]; then
	info SORTTAB vmlinux
	if ! sorttable vmlinux; then
		echo >&2 Failed to sort kernel tables
		exit 1
	fi
fi

info SYSMAP System.map
mksysmap vmlinux System.map

# step a (see comment above)
if [ -n "${CONFIG_KALLSYMS}" ]; then
	mksysmap ${kallsyms_vmlinux} .tmp_System.map

	if ! cmp -s System.map .tmp_System.map; then
		echo >&2 Inconsistent kallsyms data
		echo >&2 Try "make KALLSYMS_EXTRA_PASS=1" as a workaround
		exit 1
	fi
fi