Networking fixes for 5.18-rc1 and rethook patches.

Features:

 - kprobes: rethook: x86: replace kretprobe trampoline with rethook

Current release - regressions:

 - sfc: avoid null-deref on systems without NUMA awareness in the new
   queue sizing code

Current release - new code bugs:

 - vxlan: do not feed vxlan_vnifilter_dump_dev with non-vxlan devices

 - eth: lan966x: fix null-deref on PHY pointer in timestamp ioctl when
   interface is down

Previous releases - always broken:

 - openvswitch: correct neighbor discovery target mask field in the
   flow dump

 - wireguard: ignore v6 endpoints when ipv6 is disabled and fix a leak

 - rxrpc: fix call timer start racing with call destruction

 - rxrpc: fix null-deref when security type is rxrpc_no_security

 - can: fix UAF bugs around echo skbs in multiple drivers

Misc:

 - docs: move netdev-FAQ to the "process" section of the documentation

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmJF3S0ACgkQMUZtbf5S
IruIvA/+NZx+c+fBBrbjOh63avRL7kYIqIDREf+v6lh4ZXmbrp22xalcjIdxgWeK
vAiYfYmzZblWAGkilcvPG3blCBc+9b+YE+pPJXFe60Huv3eYpjKfgTKwQOg/lIeM
8MfPP7eBwcJ/ltSTRtySRl9LYgyVcouP9rAVJavFVYrvuDYunwhfChswVfGCYon8
42O4nRwrtkTE1MjHD8HS3YxvwGlo+iIyhsxgG/gWx8F2xeIG22H6adzjDXcCQph8
air/awrJ4enYkVMRokGNfNppK9Z3vjJDX5xha3CREpvXNPe0F24cAE/L8XqyH7+r
/bXP5y9VC9mmEO7x4Le3VmDhOJGbCOtR89gTlevftDRdSIrbNHffZhbPW48tR7o8
NJFlhiSJb4HEMN0q7BmxnWaKlbZUlvLEXLuU5ytZE/G7i+nETULlunfZrCD4eNYH
gBGYhiob2I/XotJA9QzG/RDyaFwDaC/VARsyv37PSeBAl/yrEGAeP7DsKkKX/ayg
LM9ItveqHXK30J0xr3QJA8s49EkIYejjYR3l0hQ9esf9QvGK99dE/fo44Apf3C3A
Lz6XpnRc5Xd7tZ9Aopwb3FqOH6WR9Hq9Qlbk0qifsL/2sRbatpuZbbDK6L3CR3Ir
WFNcOoNbbqv85kCKFXFjj0jdpoNa9Yej8XFkMkVSkM3sHImYmYQ=
=5Bvy
-----END PGP SIGNATURE-----

Merge tag 'net-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull more networking updates from Jakub Kicinski:
 "Networking fixes and rethook patches. ..."

* tag 'net-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (57 commits)
  vxlan: do not feed vxlan_vnifilter_dump_dev with non vxlan devices
  openvswitch: Add recirc_id to recirc warning
  rxrpc: fix some null-ptr-deref bugs in server_key.c
  rxrpc: Fix call timer start racing with call destruction
  net: hns3: fix software vlan talbe of vlan 0 inconsistent with hardware
  net: hns3: fix the concurrency between functions reading debugfs
  docs: netdev: move the netdev-FAQ to the process pages
  docs: netdev: broaden the new vs old code formatting guidelines
  docs: netdev: call out the merge window in tag checking
  docs: netdev: add missing back ticks
  docs: netdev: make the testing requirement more stringent
  docs: netdev: add a question about re-posting frequency
  docs: netdev: rephrase the 'should I update patchwork' question
  docs: netdev: rephrase the 'Under review' question
  docs: netdev: shorten the name and mention msgid for patch status
  docs: netdev: note that RFC postings are allowed any time
  docs: netdev: turn the net-next closed into a Warning
  docs: netdev: move the patch marking section up
  docs: netdev: minor reword
  docs: netdev: replace references to old archives
  ...
This commit is contained in:
commit 2975dbdc39
@@ -658,7 +658,7 @@ when:

.. Links
.. _Documentation/process/: https://www.kernel.org/doc/html/latest/process/
.. _netdev-FAQ: ../networking/netdev-FAQ.rst
.. _netdev-FAQ: Documentation/process/maintainer-netdev.rst
.. _selftests:
   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/
.. _Documentation/dev-tools/kselftest.rst:
@@ -7,7 +7,9 @@ This device has following properties:

Required properties:

- compatible: Should be "qcom,qcs404-ethqos"
- compatible: Should be one of:
	"qcom,qcs404-ethqos"
	"qcom,sm8150-ethqos"

- reg: Address and length of the register set for the device
@@ -1,12 +1,13 @@
Linux Networking Documentation
==============================

Refer to :ref:`netdev-FAQ` for a guide on netdev development process specifics.

Contents:

.. toctree::
   :maxdepth: 2

   netdev-FAQ
   af_xdp
   bareudp
   batman-adv
@@ -16,3 +16,4 @@ Contents:
   :maxdepth: 2

   maintainer-tip
   maintainer-netdev
@@ -16,12 +16,10 @@ Note that some subsystems (e.g. wireless drivers) which have a high
volume of traffic have their own specific mailing lists.

The netdev list is managed (like many other Linux mailing lists) through
VGER (http://vger.kernel.org/) and archives can be found below:
VGER (http://vger.kernel.org/) with archives available at
https://lore.kernel.org/netdev/

- http://marc.info/?l=linux-netdev
- http://www.spinics.net/lists/netdev/

Aside from subsystems like that mentioned above, all network-related
Aside from subsystems like those mentioned above, all network-related
Linux development (i.e. RFC, review, comments, etc.) takes place on
netdev.
@@ -37,6 +35,17 @@ for the future release. You can find the trees here:
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git

How do I indicate which tree (net vs. net-next) my patch should be in?
----------------------------------------------------------------------
To help maintainers and CI bots you should explicitly mark which tree
your patch is targeting. Assuming that you use git, use the prefix
flag::

  git format-patch --subject-prefix='PATCH net-next' start..finish

Use ``net`` instead of ``net-next`` (always lower case) in the above for
bug-fix ``net`` content.

How often do changes from these trees make it to the mainline Linus tree?
-------------------------------------------------------------------------
To understand this, you need to know a bit of background information on
@@ -61,8 +70,12 @@ relating to vX.Y
An announcement indicating when ``net-next`` has been closed is usually
sent to netdev, but knowing the above, you can predict that in advance.

IMPORTANT: Do not send new ``net-next`` content to netdev during the
period during which ``net-next`` tree is closed.
.. warning::
   Do not send new ``net-next`` content to netdev during the
   period during which ``net-next`` tree is closed.

RFC patches sent for review only are obviously welcome at any time
(use ``--subject-prefix='RFC net-next'`` with ``git format-patch``).

Shortly after the two weeks have passed (and vX.Y-rc1 is released), the
tree for ``net-next`` reopens to collect content for the next (vX.Y+1)
@@ -90,41 +103,35 @@ Load the mainline (Linus) page here:

and note the top of the "tags" section. If it is rc1, it is early in
the dev cycle. If it was tagged rc7 a week ago, then a release is
probably imminent.
probably imminent. If the most recent tag is a final release tag
(without an ``-rcN`` suffix) - we are most likely in a merge window
and ``net-next`` is closed.

How do I indicate which tree (net vs. net-next) my patch should be in?
----------------------------------------------------------------------
Firstly, think whether you have a bug fix or new "next-like" content.
Then once decided, assuming that you use git, use the prefix flag, i.e.
::

  git format-patch --subject-prefix='PATCH net-next' start..finish

Use ``net`` instead of ``net-next`` (always lower case) in the above for
bug-fix ``net`` content. If you don't use git, then note the only magic
in the above is just the subject text of the outgoing e-mail, and you
can manually change it yourself with whatever MUA you are comfortable
with.

I sent a patch and I'm wondering what happened to it - how can I tell whether it got merged?
--------------------------------------------------------------------------------------------
How can I tell the status of a patch I've sent?
-----------------------------------------------
Start by looking at the main patchworks queue for netdev:

  https://patchwork.kernel.org/project/netdevbpf/list/

The "State" field will tell you exactly where things are at with your
patch.
patch. Patches are indexed by the ``Message-ID`` header of the emails
which carried them so if you have trouble finding your patch append
the value of ``Message-ID`` to the URL above.

The above only says "Under Review". How can I find out more?
-------------------------------------------------------------
How long before my patch is accepted?
-------------------------------------
Generally speaking, the patches get triaged quickly (in less than
48h). So be patient. Asking the maintainer for status updates on your
48h). But be patient, if your patch is active in patchwork (i.e. it's
listed on the project's patch list) the chances it was missed are close to zero.
Asking the maintainer for status updates on your
patch is a good way to ensure your patch is ignored or pushed to the
bottom of the priority list.

I submitted multiple versions of the patch series. Should I directly update patchwork for the previous versions of these patch series?
--------------------------------------------------------------------------------------------------------------------------------------
No, please don't interfere with the patch status on patchwork, leave
Should I directly update patchwork state of my own patches?
-----------------------------------------------------------
It may be tempting to help the maintainers and update the state of your
own patches when you post a new version or spot a bug. Please do not do that.
Interfering with the patch status on patchwork will only cause confusion. Leave
it to the maintainer to figure out what is the most recent and current
version that should be applied. If there is any doubt, the maintainer
will reply and ask what should be done.
@@ -135,6 +142,17 @@ No, please resend the entire patch series and make sure you do number your
patches such that it is clear this is the latest and greatest set of patches
that can be applied.

I have received review feedback, when should I post a revised version of the patches?
-------------------------------------------------------------------------------------
Allow at least 24 hours to pass between postings. This will ensure reviewers
from all geographical locations have a chance to chime in. Do not wait
too long (weeks) between postings either as it will make it harder for reviewers
to recall all the context.

Make sure you address all the feedback in your new posting. Do not post a new
version of the code if the discussion about the previous version is still
ongoing, unless directly instructed by a reviewer.

I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do?
----------------------------------------------------------------------------------------------------------------------------------------
There is no revert possible, once it is pushed out, it stays like that.
@@ -165,10 +183,10 @@ it is requested that you make it look like this::
 * another line of text
 */

I am working in existing code that has the former comment style and not the latter. Should I submit new code in the former style or the latter?
-----------------------------------------------------------------------------------------------------------------------------------------------
Make it the latter style, so that eventually all code in the domain
of netdev is of this format.
I am working in existing code which uses non-standard formatting. Which formatting should I use?
------------------------------------------------------------------------------------------------
Make your code follow the most recent guidelines, so that eventually all code
in the domain of netdev is in the preferred format.

I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list?
---------------------------------------------------------------------------------------------------------------------------
@@ -180,11 +198,15 @@ as possible alternative mechanisms.

What level of testing is expected before I submit my change?
------------------------------------------------------------
If your changes are against ``net-next``, the expectation is that you
have tested by layering your changes on top of ``net-next``. Ideally
you will have done run-time testing specific to your change, but at a
minimum, your changes should survive an ``allyesconfig`` and an
``allmodconfig`` build without new warnings or failures.
At the very minimum your changes must survive an ``allyesconfig`` and an
``allmodconfig`` build with ``W=1`` set without new warnings or failures.

Ideally you will have done run-time testing specific to your change,
and the patch series contains a set of kernel selftest for
``tools/testing/selftests/net`` or using the KUnit framework.

You are expected to test your changes on top of the relevant networking
tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.

How do I post corresponding changes to user space components?
-------------------------------------------------------------
@@ -198,7 +220,7 @@ or the user space project is not reviewed on netdev include a link
to a public repo where user space patches can be seen.

In case user space tooling lives in a separate repository but is
reviewed on netdev (e.g. patches to `iproute2` tools) kernel and
reviewed on netdev (e.g. patches to ``iproute2`` tools) kernel and
user space patches should form separate series (threads) when posted
to the mailing list, e.g.::
@@ -231,18 +253,18 @@ traffic if we can help it.
netdevsim is great, can I extend it for my out-of-tree tests?
-------------------------------------------------------------

No, `netdevsim` is a test vehicle solely for upstream tests.
(Please add your tests under tools/testing/selftests/.)
No, ``netdevsim`` is a test vehicle solely for upstream tests.
(Please add your tests under ``tools/testing/selftests/``.)

We also give no guarantees that `netdevsim` won't change in the future
We also give no guarantees that ``netdevsim`` won't change in the future
in a way which would break what would normally be considered uAPI.

Is netdevsim considered a "user" of an API?
-------------------------------------------

Linux kernel has a long standing rule that no API should be added unless
it has a real, in-tree user. Mock-ups and tests based on `netdevsim` are
strongly encouraged when adding new APIs, but `netdevsim` in itself
it has a real, in-tree user. Mock-ups and tests based on ``netdevsim`` are
strongly encouraged when adding new APIs, but ``netdevsim`` in itself
is **not** considered a use case/user.

Any other tips to help ensure my net/net-next patch gets OK'd?
@@ -13653,6 +13653,7 @@ B: mailto:netdev@vger.kernel.org
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
F:	Documentation/networking/
F:	Documentation/process/maintainer-netdev.rst
F:	include/linux/in.h
F:	include/linux/net.h
F:	include/linux/netdevice.h
@@ -164,7 +164,13 @@ config ARCH_USE_BUILTIN_BSWAP

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
@@ -224,6 +224,7 @@ config X86
	select HAVE_KPROBES_ON_FTRACE
	select HAVE_FUNCTION_ERROR_INJECTION
	select HAVE_KRETPROBES
	select HAVE_RETHOOK
	select HAVE_KVM
	select HAVE_LIVEPATCH if X86_64
	select HAVE_MIXED_BREAKPOINTS_REGS
@ -4,7 +4,7 @@
|
|||
|
||||
#include <linux/sched.h>
|
||||
#include <linux/ftrace.h>
|
||||
#include <linux/kprobes.h>
|
||||
#include <linux/rethook.h>
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/stacktrace.h>
|
||||
|
||||
|
@ -16,7 +16,7 @@ struct unwind_state {
|
|||
unsigned long stack_mask;
|
||||
struct task_struct *task;
|
||||
int graph_idx;
|
||||
#ifdef CONFIG_KRETPROBES
|
||||
#if defined(CONFIG_RETHOOK)
|
||||
struct llist_node *kr_cur;
|
||||
#endif
|
||||
bool error;
|
||||
|
@ -104,19 +104,18 @@ void unwind_module_init(struct module *mod, void *orc_ip, size_t orc_ip_size,
|
|||
#endif
|
||||
|
||||
static inline
|
||||
unsigned long unwind_recover_kretprobe(struct unwind_state *state,
|
||||
unsigned long addr, unsigned long *addr_p)
|
||||
unsigned long unwind_recover_rethook(struct unwind_state *state,
|
||||
unsigned long addr, unsigned long *addr_p)
|
||||
{
|
||||
#ifdef CONFIG_KRETPROBES
|
||||
return is_kretprobe_trampoline(addr) ?
|
||||
kretprobe_find_ret_addr(state->task, addr_p, &state->kr_cur) :
|
||||
addr;
|
||||
#else
|
||||
return addr;
|
||||
#ifdef CONFIG_RETHOOK
|
||||
if (is_rethook_trampoline(addr))
|
||||
return rethook_find_ret_addr(state->task, (unsigned long)addr_p,
|
||||
&state->kr_cur);
|
||||
#endif
|
||||
return addr;
|
||||
}
|
||||
|
||||
/* Recover the return address modified by kretprobe and ftrace_graph. */
|
||||
/* Recover the return address modified by rethook and ftrace_graph. */
|
||||
static inline
|
||||
unsigned long unwind_recover_ret_addr(struct unwind_state *state,
|
||||
unsigned long addr, unsigned long *addr_p)
|
||||
|
@ -125,7 +124,7 @@ unsigned long unwind_recover_ret_addr(struct unwind_state *state,
|
|||
|
||||
ret = ftrace_graph_ret_addr(state->task, &state->graph_idx,
|
||||
addr, addr_p);
|
||||
return unwind_recover_kretprobe(state, ret, addr_p);
|
||||
return unwind_recover_rethook(state, ret, addr_p);
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
@ -103,6 +103,7 @@ obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
|
|||
obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o
|
||||
obj-$(CONFIG_X86_TSC) += trace_clock.o
|
||||
obj-$(CONFIG_TRACING) += trace.o
|
||||
obj-$(CONFIG_RETHOOK) += rethook.o
|
||||
obj-$(CONFIG_CRASH_CORE) += crash_core_$(BITS).o
|
||||
obj-$(CONFIG_KEXEC_CORE) += machine_kexec_$(BITS).o
|
||||
obj-$(CONFIG_KEXEC_CORE) += relocate_kernel_$(BITS).o crash.o
|
||||
|
|
|
@ -6,6 +6,7 @@
|
|||
|
||||
#include <asm/asm.h>
|
||||
#include <asm/frame.h>
|
||||
#include <asm/insn.h>
|
||||
|
||||
#ifdef CONFIG_X86_64
|
||||
|
||||
|
|
|
@ -811,18 +811,6 @@ set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
|
|||
= (regs->flags & X86_EFLAGS_IF);
|
||||
}
|
||||
|
||||
void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
|
||||
{
|
||||
unsigned long *sara = stack_addr(regs);
|
||||
|
||||
ri->ret_addr = (kprobe_opcode_t *) *sara;
|
||||
ri->fp = sara;
|
||||
|
||||
/* Replace the return addr with trampoline addr */
|
||||
*sara = (unsigned long) &__kretprobe_trampoline;
|
||||
}
|
||||
NOKPROBE_SYMBOL(arch_prepare_kretprobe);
|
||||
|
||||
static void kprobe_post_process(struct kprobe *cur, struct pt_regs *regs,
|
||||
struct kprobe_ctlblk *kcb)
|
||||
{
|
||||
|
@ -1023,101 +1011,6 @@ int kprobe_int3_handler(struct pt_regs *regs)
|
|||
}
|
||||
NOKPROBE_SYMBOL(kprobe_int3_handler);
|
||||
|
||||
/*
|
||||
* When a retprobed function returns, this code saves registers and
|
||||
* calls trampoline_handler() runs, which calls the kretprobe's handler.
|
||||
*/
|
||||
asm(
|
||||
".text\n"
|
||||
".global __kretprobe_trampoline\n"
|
||||
".type __kretprobe_trampoline, @function\n"
|
||||
"__kretprobe_trampoline:\n"
|
||||
#ifdef CONFIG_X86_64
|
||||
ANNOTATE_NOENDBR
|
||||
/* Push a fake return address to tell the unwinder it's a kretprobe. */
|
||||
" pushq $__kretprobe_trampoline\n"
|
||||
UNWIND_HINT_FUNC
|
||||
/* Save the 'sp - 8', this will be fixed later. */
|
||||
" pushq %rsp\n"
|
||||
" pushfq\n"
|
||||
SAVE_REGS_STRING
|
||||
" movq %rsp, %rdi\n"
|
||||
" call trampoline_handler\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* In trampoline_handler(), 'regs->flags' is copied to 'regs->sp'. */
|
||||
" addq $8, %rsp\n"
|
||||
" popfq\n"
|
||||
#else
|
||||
/* Push a fake return address to tell the unwinder it's a kretprobe. */
|
||||
" pushl $__kretprobe_trampoline\n"
|
||||
UNWIND_HINT_FUNC
|
||||
/* Save the 'sp - 4', this will be fixed later. */
|
||||
" pushl %esp\n"
|
||||
" pushfl\n"
|
||||
SAVE_REGS_STRING
|
||||
" movl %esp, %eax\n"
|
||||
" call trampoline_handler\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* In trampoline_handler(), 'regs->flags' is copied to 'regs->sp'. */
|
||||
" addl $4, %esp\n"
|
||||
" popfl\n"
|
||||
#endif
|
||||
ASM_RET
|
||||
".size __kretprobe_trampoline, .-__kretprobe_trampoline\n"
|
||||
);
|
||||
NOKPROBE_SYMBOL(__kretprobe_trampoline);
|
||||
/*
|
||||
* __kretprobe_trampoline() skips updating frame pointer. The frame pointer
|
||||
* saved in trampoline_handler() points to the real caller function's
|
||||
* frame pointer. Thus the __kretprobe_trampoline() doesn't have a
|
||||
* standard stack frame with CONFIG_FRAME_POINTER=y.
|
||||
* Let's mark it non-standard function. Anyway, FP unwinder can correctly
|
||||
* unwind without the hint.
|
||||
*/
|
||||
STACK_FRAME_NON_STANDARD_FP(__kretprobe_trampoline);
|
||||
|
||||
/* This is called from kretprobe_trampoline_handler(). */
|
||||
void arch_kretprobe_fixup_return(struct pt_regs *regs,
|
||||
kprobe_opcode_t *correct_ret_addr)
|
||||
{
|
||||
unsigned long *frame_pointer = &regs->sp + 1;
|
||||
|
||||
/* Replace fake return address with real one. */
|
||||
*frame_pointer = (unsigned long)correct_ret_addr;
|
||||
}
|
||||
|
||||
/*
|
||||
* Called from __kretprobe_trampoline
|
||||
*/
|
||||
__used __visible void trampoline_handler(struct pt_regs *regs)
|
||||
{
|
||||
unsigned long *frame_pointer;
|
||||
|
||||
/* fixup registers */
|
||||
regs->cs = __KERNEL_CS;
|
||||
#ifdef CONFIG_X86_32
|
||||
regs->gs = 0;
|
||||
#endif
|
||||
regs->ip = (unsigned long)&__kretprobe_trampoline;
|
||||
regs->orig_ax = ~0UL;
|
||||
regs->sp += sizeof(long);
|
||||
frame_pointer = &regs->sp + 1;
|
||||
|
||||
/*
|
||||
* The return address at 'frame_pointer' is recovered by the
|
||||
* arch_kretprobe_fixup_return() which called from the
|
||||
* kretprobe_trampoline_handler().
|
||||
*/
|
||||
kretprobe_trampoline_handler(regs, frame_pointer);
|
||||
|
||||
/*
|
||||
* Copy FLAGS to 'pt_regs::sp' so that __kretprobe_trapmoline()
|
||||
* can do RET right after POPF.
|
||||
*/
|
||||
regs->sp = regs->flags;
|
||||
}
|
||||
NOKPROBE_SYMBOL(trampoline_handler);
|
||||
|
||||
int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
|
||||
{
|
||||
struct kprobe *cur = kprobe_running();
|
||||
|
|
|
@ -106,7 +106,8 @@ asm (
|
|||
".global optprobe_template_entry\n"
|
||||
"optprobe_template_entry:\n"
|
||||
#ifdef CONFIG_X86_64
|
||||
/* We don't bother saving the ss register */
|
||||
" pushq $" __stringify(__KERNEL_DS) "\n"
|
||||
/* Save the 'sp - 8', this will be fixed later. */
|
||||
" pushq %rsp\n"
|
||||
" pushfq\n"
|
||||
".global optprobe_template_clac\n"
|
||||
|
@ -121,14 +122,17 @@ asm (
|
|||
".global optprobe_template_call\n"
|
||||
"optprobe_template_call:\n"
|
||||
ASM_NOP5
|
||||
/* Move flags to rsp */
|
||||
/* Copy 'regs->flags' into 'regs->ss'. */
|
||||
" movq 18*8(%rsp), %rdx\n"
|
||||
" movq %rdx, 19*8(%rsp)\n"
|
||||
" movq %rdx, 20*8(%rsp)\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* Skip flags entry */
|
||||
" addq $8, %rsp\n"
|
||||
/* Skip 'regs->flags' and 'regs->sp'. */
|
||||
" addq $16, %rsp\n"
|
||||
/* And pop flags register from 'regs->ss'. */
|
||||
" popfq\n"
|
||||
#else /* CONFIG_X86_32 */
|
||||
" pushl %ss\n"
|
||||
/* Save the 'sp - 4', this will be fixed later. */
|
||||
" pushl %esp\n"
|
||||
" pushfl\n"
|
||||
".global optprobe_template_clac\n"
|
||||
|
@ -142,12 +146,13 @@ asm (
|
|||
".global optprobe_template_call\n"
|
||||
"optprobe_template_call:\n"
|
||||
ASM_NOP5
|
||||
/* Move flags into esp */
|
||||
/* Copy 'regs->flags' into 'regs->ss'. */
|
||||
" movl 14*4(%esp), %edx\n"
|
||||
" movl %edx, 15*4(%esp)\n"
|
||||
" movl %edx, 16*4(%esp)\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* Skip flags entry */
|
||||
" addl $4, %esp\n"
|
||||
/* Skip 'regs->flags' and 'regs->sp'. */
|
||||
" addl $8, %esp\n"
|
||||
/* And pop flags register from 'regs->ss'. */
|
||||
" popfl\n"
|
||||
#endif
|
||||
".global optprobe_template_end\n"
|
||||
|
@ -179,6 +184,8 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
|
|||
kprobes_inc_nmissed_count(&op->kp);
|
||||
} else {
|
||||
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
|
||||
/* Adjust stack pointer */
|
||||
regs->sp += sizeof(long);
|
||||
/* Save skipped registers */
|
||||
regs->cs = __KERNEL_CS;
|
||||
#ifdef CONFIG_X86_32
|
||||
|
|
|
@ -0,0 +1,127 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-or-later
|
||||
/*
|
||||
* x86 implementation of rethook. Mostly copied from arch/x86/kernel/kprobes/core.c.
|
||||
*/
|
||||
#include <linux/bug.h>
|
||||
#include <linux/rethook.h>
|
||||
#include <linux/kprobes.h>
|
||||
#include <linux/objtool.h>
|
||||
|
||||
#include "kprobes/common.h"
|
||||
|
||||
__visible void arch_rethook_trampoline_callback(struct pt_regs *regs);
|
||||
|
||||
#ifndef ANNOTATE_NOENDBR
|
||||
#define ANNOTATE_NOENDBR
|
||||
#endif
|
||||
|
||||
/*
|
||||
* When a target function returns, this code saves registers and calls
|
||||
* arch_rethook_trampoline_callback(), which calls the rethook handler.
|
||||
*/
|
||||
asm(
|
||||
".text\n"
|
||||
".global arch_rethook_trampoline\n"
|
||||
".type arch_rethook_trampoline, @function\n"
|
||||
"arch_rethook_trampoline:\n"
|
||||
#ifdef CONFIG_X86_64
|
||||
ANNOTATE_NOENDBR /* This is only jumped from ret instruction */
|
||||
/* Push a fake return address to tell the unwinder it's a rethook. */
|
||||
" pushq $arch_rethook_trampoline\n"
|
||||
UNWIND_HINT_FUNC
|
||||
" pushq $" __stringify(__KERNEL_DS) "\n"
|
||||
/* Save the 'sp - 16', this will be fixed later. */
|
||||
" pushq %rsp\n"
|
||||
" pushfq\n"
|
||||
SAVE_REGS_STRING
|
||||
" movq %rsp, %rdi\n"
|
||||
" call arch_rethook_trampoline_callback\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* In the callback function, 'regs->flags' is copied to 'regs->ss'. */
|
||||
" addq $16, %rsp\n"
|
||||
" popfq\n"
|
||||
#else
|
||||
/* Push a fake return address to tell the unwinder it's a rethook. */
|
||||
" pushl $arch_rethook_trampoline\n"
|
||||
UNWIND_HINT_FUNC
|
||||
" pushl %ss\n"
|
||||
/* Save the 'sp - 8', this will be fixed later. */
|
||||
" pushl %esp\n"
|
||||
" pushfl\n"
|
||||
SAVE_REGS_STRING
|
||||
" movl %esp, %eax\n"
|
||||
" call arch_rethook_trampoline_callback\n"
|
||||
RESTORE_REGS_STRING
|
||||
/* In the callback function, 'regs->flags' is copied to 'regs->ss'. */
|
||||
" addl $8, %esp\n"
|
||||
" popfl\n"
|
||||
#endif
|
||||
ASM_RET
|
||||
".size arch_rethook_trampoline, .-arch_rethook_trampoline\n"
|
||||
);
|
||||
NOKPROBE_SYMBOL(arch_rethook_trampoline);
|
||||
|
||||
/*
|
||||
* Called from arch_rethook_trampoline
|
||||
*/
|
||||
__used __visible void arch_rethook_trampoline_callback(struct pt_regs *regs)
|
||||
{
|
||||
unsigned long *frame_pointer;
|
||||
|
||||
/* fixup registers */
|
||||
regs->cs = __KERNEL_CS;
|
||||
#ifdef CONFIG_X86_32
|
||||
regs->gs = 0;
|
||||
#endif
|
||||
regs->ip = (unsigned long)&arch_rethook_trampoline;
|
||||
regs->orig_ax = ~0UL;
|
||||
regs->sp += 2*sizeof(long);
|
||||
frame_pointer = (long *)(regs + 1);
|
||||
|
||||
/*
|
||||
* The return address at 'frame_pointer' is recovered by the
|
||||
* arch_rethook_fixup_return() which called from this
|
||||
* rethook_trampoline_handler().
|
||||
*/
|
||||
rethook_trampoline_handler(regs, (unsigned long)frame_pointer);
|
||||
|
||||
/*
|
||||
* Copy FLAGS to 'pt_regs::ss' so that arch_rethook_trapmoline()
|
||||
* can do RET right after POPF.
|
||||
*/
|
||||
*(unsigned long *)&regs->ss = regs->flags;
|
||||
}
|
||||
NOKPROBE_SYMBOL(arch_rethook_trampoline_callback);
|
||||
|
||||
/*
|
||||
* arch_rethook_trampoline() skips updating frame pointer. The frame pointer
|
||||
* saved in arch_rethook_trampoline_callback() points to the real caller
|
||||
* function's frame pointer. Thus the arch_rethook_trampoline() doesn't have
|
||||
* a standard stack frame with CONFIG_FRAME_POINTER=y.
|
||||
* Let's mark it non-standard function. Anyway, FP unwinder can correctly
|
||||
* unwind without the hint.
|
||||
*/
|
||||
STACK_FRAME_NON_STANDARD_FP(arch_rethook_trampoline);
|
||||
|
||||
/* This is called from rethook_trampoline_handler(). */
|
||||
void arch_rethook_fixup_return(struct pt_regs *regs,
|
||||
unsigned long correct_ret_addr)
|
||||
{
|
||||
unsigned long *frame_pointer = (void *)(regs + 1);
|
||||
|
||||
/* Replace fake return address with real one. */
|
||||
*frame_pointer = correct_ret_addr;
|
||||
}
|
||||
NOKPROBE_SYMBOL(arch_rethook_fixup_return);
|
||||
|
||||
void arch_rethook_prepare(struct rethook_node *rh, struct pt_regs *regs, bool mcount)
|
||||
{
|
||||
unsigned long *stack = (unsigned long *)regs->sp;
|
||||
|
||||
rh->ret_addr = stack[0];
|
||||
rh->frame = regs->sp;
|
||||
|
||||
/* Replace the return addr with trampoline addr */
|
||||
stack[0] = (unsigned long) arch_rethook_trampoline;
|
||||
}
|
||||
NOKPROBE_SYMBOL(arch_rethook_prepare);
|
|
@ -550,15 +550,15 @@ bool unwind_next_frame(struct unwind_state *state)
|
|||
}
|
||||
/*
|
||||
* There is a small chance to interrupt at the entry of
|
||||
* __kretprobe_trampoline() where the ORC info doesn't exist.
|
||||
* That point is right after the RET to __kretprobe_trampoline()
|
||||
* arch_rethook_trampoline() where the ORC info doesn't exist.
|
||||
* That point is right after the RET to arch_rethook_trampoline()
|
||||
* which was modified return address.
|
||||
* At that point, the @addr_p of the unwind_recover_kretprobe()
|
||||
* At that point, the @addr_p of the unwind_recover_rethook()
|
||||
* (this has to point the address of the stack entry storing
|
||||
* the modified return address) must be "SP - (a stack entry)"
|
||||
* because SP is incremented by the RET.
|
||||
*/
|
||||
state->ip = unwind_recover_kretprobe(state, state->ip,
|
||||
state->ip = unwind_recover_rethook(state, state->ip,
|
||||
(unsigned long *)(state->sp - sizeof(long)));
|
||||
state->regs = (struct pt_regs *)sp;
|
||||
state->prev_regs = NULL;
|
||||
|
@ -573,7 +573,7 @@ bool unwind_next_frame(struct unwind_state *state)
|
|||
goto err;
|
||||
}
|
||||
/* See UNWIND_HINT_TYPE_REGS case comment. */
|
||||
state->ip = unwind_recover_kretprobe(state, state->ip,
|
||||
state->ip = unwind_recover_rethook(state, state->ip,
|
||||
(unsigned long *)(state->sp - sizeof(long)));
|
||||
|
||||
if (state->full_regs)
|
||||
|
|
|
@ -1637,8 +1637,6 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
|
|||
if (err)
|
||||
goto out_fail;
|
||||
|
||||
can_put_echo_skb(skb, dev, 0, 0);
|
||||
|
||||
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
|
||||
cccr = m_can_read(cdev, M_CAN_CCCR);
|
||||
cccr &= ~CCCR_CMR_MASK;
|
||||
|
@ -1655,6 +1653,9 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
|
|||
m_can_write(cdev, M_CAN_CCCR, cccr);
|
||||
}
|
||||
m_can_write(cdev, M_CAN_TXBTIE, 0x1);
|
||||
|
||||
can_put_echo_skb(skb, dev, 0, 0);
|
||||
|
||||
m_can_write(cdev, M_CAN_TXBAR, 0x1);
|
||||
/* End of xmit function for version 3.0.x */
|
||||
} else {
|
||||
|
|
|
@ -1786,7 +1786,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv, u32 *dev_id,
|
|||
out_kfree_buf_rx:
|
||||
kfree(buf_rx);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
|
||||
#define MCP251XFD_QUIRK_ACTIVE(quirk) \
|
||||
|
|
|
@ -819,7 +819,6 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
|
|||
|
||||
usb_unanchor_urb(urb);
|
||||
usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
|
||||
dev_kfree_skb(skb);
|
||||
|
||||
atomic_dec(&dev->active_tx_urbs);
|
||||
|
||||
|
|
|
@ -1092,6 +1092,8 @@ static struct gs_can *gs_make_candev(unsigned int channel,
|
|||
dev->data_bt_const.brp_inc = le32_to_cpu(bt_const_extended->dbrp_inc);
|
||||
|
||||
dev->can.data_bittiming_const = &dev->data_bt_const;
|
||||
|
||||
kfree(bt_const_extended);
|
||||
}
|
||||
|
||||
SET_NETDEV_DEV(netdev, &intf->dev);
|
||||
|
|
|
@ -33,10 +33,6 @@
|
|||
#define MCBA_USB_RX_BUFF_SIZE 64
|
||||
#define MCBA_USB_TX_BUFF_SIZE (sizeof(struct mcba_usb_msg))
|
||||
|
||||
/* MCBA endpoint numbers */
|
||||
#define MCBA_USB_EP_IN 1
|
||||
#define MCBA_USB_EP_OUT 1
|
||||
|
||||
/* Microchip command id */
|
||||
#define MBCA_CMD_RECEIVE_MESSAGE 0xE3
|
||||
#define MBCA_CMD_I_AM_ALIVE_FROM_CAN 0xF5
|
||||
|
@ -83,6 +79,8 @@ struct mcba_priv {
|
|||
atomic_t free_ctx_cnt;
|
||||
void *rxbuf[MCBA_MAX_RX_URBS];
|
||||
dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
|
||||
int rx_pipe;
|
||||
int tx_pipe;
|
||||
};
|
||||
|
||||
/* CAN frame */
|
||||
|
@ -268,10 +266,8 @@ static netdev_tx_t mcba_usb_xmit(struct mcba_priv *priv,
|
|||
|
||||
memcpy(buf, usb_msg, MCBA_USB_TX_BUFF_SIZE);
|
||||
|
||||
usb_fill_bulk_urb(urb, priv->udev,
|
||||
usb_sndbulkpipe(priv->udev, MCBA_USB_EP_OUT), buf,
|
||||
MCBA_USB_TX_BUFF_SIZE, mcba_usb_write_bulk_callback,
|
||||
ctx);
|
||||
usb_fill_bulk_urb(urb, priv->udev, priv->tx_pipe, buf, MCBA_USB_TX_BUFF_SIZE,
|
||||
mcba_usb_write_bulk_callback, ctx);
|
||||
|
||||
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
|
||||
usb_anchor_urb(urb, &priv->tx_submitted);
|
||||
|
@ -364,7 +360,6 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
|
|||
xmit_failed:
|
||||
can_free_echo_skb(priv->netdev, ctx->ndx, NULL);
|
||||
mcba_usb_free_ctx(ctx);
|
||||
dev_kfree_skb(skb);
|
||||
stats->tx_dropped++;
|
||||
|
||||
return NETDEV_TX_OK;
|
||||
|
@ -608,7 +603,7 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
|
|||
resubmit_urb:
|
||||
|
||||
usb_fill_bulk_urb(urb, priv->udev,
|
||||
usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_OUT),
|
||||
priv->rx_pipe,
|
||||
urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE,
|
||||
mcba_usb_read_bulk_callback, priv);
|
||||
|
||||
|
@ -653,7 +648,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
|
|||
urb->transfer_dma = buf_dma;
|
||||
|
||||
usb_fill_bulk_urb(urb, priv->udev,
|
||||
usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
|
||||
priv->rx_pipe,
|
||||
buf, MCBA_USB_RX_BUFF_SIZE,
|
||||
mcba_usb_read_bulk_callback, priv);
|
||||
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
|
||||
|
@ -807,6 +802,13 @@ static int mcba_usb_probe(struct usb_interface *intf,
|
|||
struct mcba_priv *priv;
|
||||
int err;
|
||||
struct usb_device *usbdev = interface_to_usbdev(intf);
|
||||
struct usb_endpoint_descriptor *in, *out;
|
||||
|
||||
err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out, NULL, NULL);
|
||||
if (err) {
|
||||
dev_err(&intf->dev, "Can't find endpoints\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
netdev = alloc_candev(sizeof(struct mcba_priv), MCBA_MAX_TX_URBS);
|
||||
if (!netdev) {
|
||||
|
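The mcba_usb probe change above replaces hard-coded endpoint numbers with endpoints discovered via usb_find_common_endpoints(), so the driver rejects devices (or fuzzed descriptors) that lack the expected bulk endpoints. A minimal sketch of that pattern, assuming hypothetical names (my_probe is illustrative, not part of this patch):

	#include <linux/usb.h>

	static int my_probe(struct usb_interface *intf,
			    const struct usb_device_id *id)
	{
		struct usb_device *udev = interface_to_usbdev(intf);
		struct usb_endpoint_descriptor *in, *out;
		unsigned int rx_pipe, tx_pipe;
		int err;

		/* Find the first bulk IN and bulk OUT endpoints of this interface. */
		err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out,
						NULL, NULL);
		if (err) {
			dev_err(&intf->dev, "required bulk endpoints not found\n");
			return err;
		}

		/* Build pipes from the discovered endpoint addresses instead of
		 * hard-coding endpoint numbers.
		 */
		rx_pipe = usb_rcvbulkpipe(udev, in->bEndpointAddress);
		tx_pipe = usb_sndbulkpipe(udev, out->bEndpointAddress);

		/* ... allocate the device and stash rx_pipe/tx_pipe in its
		 * private data, as the driver above does ...
		 */
		(void)rx_pipe;
		(void)tx_pipe;
		return 0;
	}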
@ -852,6 +854,9 @@ static int mcba_usb_probe(struct usb_interface *intf,
|
|||
goto cleanup_free_candev;
|
||||
}
|
||||
|
||||
priv->rx_pipe = usb_rcvbulkpipe(priv->udev, in->bEndpointAddress);
|
||||
priv->tx_pipe = usb_sndbulkpipe(priv->udev, out->bEndpointAddress);
|
||||
|
||||
devm_can_led_init(netdev);
|
||||
|
||||
/* Start USB dev only if we have successfully registered CAN device */
|
||||
|
|
|
@ -663,9 +663,20 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
|
|||
atomic_inc(&priv->active_tx_urbs);
|
||||
|
||||
err = usb_submit_urb(urb, GFP_ATOMIC);
|
||||
if (unlikely(err))
|
||||
goto failed;
|
||||
else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
|
||||
if (unlikely(err)) {
|
||||
can_free_echo_skb(netdev, context->echo_index, NULL);
|
||||
|
||||
usb_unanchor_urb(urb);
|
||||
usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
|
||||
|
||||
atomic_dec(&priv->active_tx_urbs);
|
||||
|
||||
if (err == -ENODEV)
|
||||
netif_device_detach(netdev);
|
||||
else
|
||||
netdev_warn(netdev, "failed tx_urb %d\n", err);
|
||||
stats->tx_dropped++;
|
||||
} else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
|
||||
/* Slow down tx path */
|
||||
netif_stop_queue(netdev);
|
||||
|
||||
|
@ -684,19 +695,6 @@ nofreecontext:
|
|||
|
||||
return NETDEV_TX_BUSY;
|
||||
|
||||
failed:
|
||||
can_free_echo_skb(netdev, context->echo_index, NULL);
|
||||
|
||||
usb_unanchor_urb(urb);
|
||||
usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
|
||||
|
||||
atomic_dec(&priv->active_tx_urbs);
|
||||
|
||||
if (err == -ENODEV)
|
||||
netif_device_detach(netdev);
|
||||
else
|
||||
netdev_warn(netdev, "failed tx_urb %d\n", err);
|
||||
|
||||
nomembuf:
|
||||
usb_free_urb(urb);
|
||||
|
||||
|
|
|
@ -1928,6 +1928,10 @@ static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
|
|||
case FLOW_ACTION_GATE:
|
||||
size = struct_size(sgi, entries, a->gate.num_entries);
|
||||
sgi = kzalloc(size, GFP_KERNEL);
|
||||
if (!sgi) {
|
||||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
vsc9959_psfp_parse_gate(a, sgi);
|
||||
ret = vsc9959_psfp_sgi_table_add(ocelot, sgi);
|
||||
if (ret) {
|
||||
|
|
|
@ -845,6 +845,7 @@ struct hnae3_handle {
|
|||
struct dentry *hnae3_dbgfs;
|
||||
/* protects concurrent contention between debugfs commands */
|
||||
struct mutex dbgfs_lock;
|
||||
char **dbgfs_buf;
|
||||
|
||||
/* Network interface message level enabled bits */
|
||||
u32 msg_enable;
|
||||
|
|
|
@ -1227,7 +1227,7 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
|
|||
return ret;
|
||||
|
||||
mutex_lock(&handle->dbgfs_lock);
|
||||
save_buf = &hns3_dbg_cmd[index].buf;
|
||||
save_buf = &handle->dbgfs_buf[index];
|
||||
|
||||
if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
|
||||
test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
|
||||
|
@ -1332,6 +1332,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
|
|||
int ret;
|
||||
u32 i;
|
||||
|
||||
handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
|
||||
ARRAY_SIZE(hns3_dbg_cmd),
|
||||
sizeof(*handle->dbgfs_buf),
|
||||
GFP_KERNEL);
|
||||
if (!handle->dbgfs_buf)
|
||||
return -ENOMEM;
|
||||
|
||||
hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
|
||||
debugfs_create_dir(name, hns3_dbgfs_root);
|
||||
handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
|
||||
|
@ -1380,9 +1387,9 @@ void hns3_dbg_uninit(struct hnae3_handle *handle)
|
|||
u32 i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
|
||||
if (hns3_dbg_cmd[i].buf) {
|
||||
kvfree(hns3_dbg_cmd[i].buf);
|
||||
hns3_dbg_cmd[i].buf = NULL;
|
||||
if (handle->dbgfs_buf[i]) {
|
||||
kvfree(handle->dbgfs_buf[i]);
|
||||
handle->dbgfs_buf[i] = NULL;
|
||||
}
|
||||
|
||||
mutex_destroy(&handle->dbgfs_lock);
|
||||
|
|
|
@ -49,7 +49,6 @@ struct hns3_dbg_cmd_info {
|
|||
enum hnae3_dbg_cmd cmd;
|
||||
enum hns3_dbg_dentry_type dentry;
|
||||
u32 buf_len;
|
||||
char *buf;
|
||||
int (*init)(struct hnae3_handle *handle, unsigned int cmd);
|
||||
};
|
||||
|
||||
|
|
|
@ -10323,11 +10323,11 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
|
|||
}
|
||||
|
||||
if (!ret) {
|
||||
if (is_kill)
|
||||
hclge_rm_vport_vlan_table(vport, vlan_id, false);
|
||||
else
|
||||
if (!is_kill)
|
||||
hclge_add_vport_vlan_table(vport, vlan_id,
|
||||
writen_to_tbl);
|
||||
else if (is_kill && vlan_id != 0)
|
||||
hclge_rm_vport_vlan_table(vport, vlan_id, false);
|
||||
} else if (is_kill) {
|
||||
/* when remove hw vlan filter failed, record the vlan id,
|
||||
* and try to remove it from hw later, to be consistence
|
||||
|
|
|
@ -710,7 +710,7 @@ static inline struct xsk_buff_pool *ice_tx_xsk_pool(struct ice_tx_ring *ring)
|
|||
struct ice_vsi *vsi = ring->vsi;
|
||||
u16 qid;
|
||||
|
||||
qid = ring->q_index - vsi->num_xdp_txq;
|
||||
qid = ring->q_index - vsi->alloc_txq;
|
||||
|
||||
if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps))
|
||||
return NULL;
|
||||
|
|
|
@ -608,6 +608,9 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
|
|||
*/
|
||||
dma_rmb();
|
||||
|
||||
if (unlikely(rx_ring->next_to_clean == rx_ring->next_to_use))
|
||||
break;
|
||||
|
||||
xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
|
||||
|
||||
size = le16_to_cpu(rx_desc->wb.pkt_len) &
|
||||
|
@ -754,7 +757,7 @@ skip:
|
|||
next_dd = next_dd + tx_thresh;
|
||||
if (next_dd >= desc_cnt)
|
||||
next_dd = tx_thresh - 1;
|
||||
} while (budget--);
|
||||
} while (--budget);
|
||||
|
||||
xdp_ring->next_dd = next_dd;
|
||||
|
||||
|
|
|
@ -408,6 +408,9 @@ static int lan966x_port_ioctl(struct net_device *dev, struct ifreq *ifr,
|
|||
}
|
||||
}
|
||||
|
||||
if (!dev->phydev)
|
||||
return -ENODEV;
|
||||
|
||||
return phy_mii_ioctl(dev->phydev, ifr, cmd);
|
||||
}
|
||||
|
||||
|
|
|
@ -5,6 +5,7 @@ config SPARX5_SWITCH
|
|||
depends on OF
|
||||
depends on ARCH_SPARX5 || COMPILE_TEST
|
||||
depends on PTP_1588_CLOCK_OPTIONAL
|
||||
depends on BRIDGE || BRIDGE=n
|
||||
select PHYLINK
|
||||
select PHY_SPARX5_SERDES
|
||||
select RESET_CONTROLLER
|
||||
|
|
|
@ -91,11 +91,9 @@ static unsigned int count_online_cores(struct efx_nic *efx, bool local_node)
|
|||
}
|
||||
|
||||
cpumask_copy(filter_mask, cpu_online_mask);
|
||||
if (local_node) {
|
||||
int numa_node = pcibus_to_node(efx->pci_dev->bus);
|
||||
|
||||
cpumask_and(filter_mask, filter_mask, cpumask_of_node(numa_node));
|
||||
}
|
||||
if (local_node)
|
||||
cpumask_and(filter_mask, filter_mask,
|
||||
cpumask_of_pcibus(efx->pci_dev->bus));
|
||||
|
||||
count = 0;
|
||||
for_each_cpu(cpu, filter_mask) {
|
||||
|
@ -386,8 +384,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
|
|||
#if defined(CONFIG_SMP)
|
||||
void efx_set_interrupt_affinity(struct efx_nic *efx)
|
||||
{
|
||||
int numa_node = pcibus_to_node(efx->pci_dev->bus);
|
||||
const struct cpumask *numa_mask = cpumask_of_node(numa_node);
|
||||
const struct cpumask *numa_mask = cpumask_of_pcibus(efx->pci_dev->bus);
|
||||
struct efx_channel *channel;
|
||||
unsigned int cpu;
|
||||
|
||||
|
|
|
@ -425,6 +425,12 @@ static int vxlan_vnifilter_dump(struct sk_buff *skb, struct netlink_callback *cb
|
|||
err = -ENODEV;
|
||||
goto out_err;
|
||||
}
|
||||
if (!netif_is_vxlan(dev)) {
|
||||
NL_SET_ERR_MSG(cb->extack,
|
||||
"The device is not a vxlan device");
|
||||
err = -EINVAL;
|
||||
goto out_err;
|
||||
}
|
||||
err = vxlan_vnifilter_dump_dev(dev, skb, cb);
|
||||
/* if the dump completed without an error we return 0 here */
|
||||
if (err != -EMSGSIZE)
|
||||
|
|
|
@ -4,6 +4,7 @@
|
|||
*/
|
||||
|
||||
#include "queueing.h"
|
||||
#include <linux/skb_array.h>
|
||||
|
||||
struct multicore_worker __percpu *
|
||||
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
|
||||
|
@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
|
|||
{
|
||||
free_percpu(queue->worker);
|
||||
WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
|
||||
ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
|
||||
ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL);
|
||||
}
|
||||
|
||||
#define NEXT(skb) ((skb)->prev)
|
||||
|
|
|
@ -160,6 +160,7 @@ out:
|
|||
rcu_read_unlock_bh();
|
||||
return ret;
|
||||
#else
|
||||
kfree_skb(skb);
|
||||
return -EAFNOSUPPORT;
|
||||
#endif
|
||||
}
|
||||
|
@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
|
|||
endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
|
||||
endpoint->src4.s_addr = ip_hdr(skb)->daddr;
|
||||
endpoint->src_if4 = skb->skb_iif;
|
||||
} else if (skb->protocol == htons(ETH_P_IPV6)) {
|
||||
} else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
|
||||
endpoint->addr6.sin6_family = AF_INET6;
|
||||
endpoint->addr6.sin6_port = udp_hdr(skb)->source;
|
||||
endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
|
||||
|
@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer,
|
|||
peer->endpoint.addr4 = endpoint->addr4;
|
||||
peer->endpoint.src4 = endpoint->src4;
|
||||
peer->endpoint.src_if4 = endpoint->src_if4;
|
||||
} else if (endpoint->addr.sa_family == AF_INET6) {
|
||||
} else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
|
||||
peer->endpoint.addr6 = endpoint->addr6;
|
||||
peer->endpoint.src6 = endpoint->src6;
|
||||
} else {
|
||||
|
|
|
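The wireguard change above wraps the IPv6 endpoint paths in IS_ENABLED(CONFIG_IPV6) so that v6 endpoints are ignored when IPv6 is disabled. As a small illustrative sketch (not from the patch itself), IS_ENABLED() expands to a compile-time constant, so the dead branch is dropped by the compiler while the code still gets type-checked:

	#include <linux/kconfig.h>
	#include <linux/socket.h>
	#include <linux/types.h>

	static bool addr_family_supported(unsigned short family)
	{
		if (family == AF_INET)
			return true;
		/* Compiles down to "return false" when CONFIG_IPV6 is not set. */
		if (IS_ENABLED(CONFIG_IPV6) && family == AF_INET6)
			return true;
		return false;
	}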
@ -1214,10 +1214,9 @@ ptp_ocp_nvmem_device_get(struct ptp_ocp *bp, const void * const tag)
|
|||
static inline void
|
||||
ptp_ocp_nvmem_device_put(struct nvmem_device **nvmemp)
|
||||
{
|
||||
if (*nvmemp != NULL) {
|
||||
if (!IS_ERR_OR_NULL(*nvmemp))
|
||||
nvmem_device_put(*nvmemp);
|
||||
*nvmemp = NULL;
|
||||
}
|
||||
*nvmemp = NULL;
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -1241,13 +1240,15 @@ ptp_ocp_read_eeprom(struct ptp_ocp *bp)
|
|||
}
|
||||
if (!nvmem) {
|
||||
nvmem = ptp_ocp_nvmem_device_get(bp, tag);
|
||||
if (!nvmem)
|
||||
goto out;
|
||||
if (IS_ERR(nvmem)) {
|
||||
ret = PTR_ERR(nvmem);
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
ret = nvmem_device_read(nvmem, map->off, map->len,
|
||||
BP_MAP_ENTRY_ADDR(bp, map));
|
||||
if (ret != map->len)
|
||||
goto read_fail;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
bp->has_eeprom_data = true;
|
||||
|
@ -1256,7 +1257,7 @@ out:
|
|||
ptp_ocp_nvmem_device_put(&nvmem);
|
||||
return;
|
||||
|
||||
read_fail:
|
||||
fail:
|
||||
dev_err(&bp->pdev->dev, "could not read eeprom: %d\n", ret);
|
||||
goto out;
|
||||
}
|
||||
|
|
|
@ -28,6 +28,7 @@
|
|||
#include <linux/ftrace.h>
|
||||
#include <linux/refcount.h>
|
||||
#include <linux/freelist.h>
|
||||
#include <linux/rethook.h>
|
||||
#include <asm/kprobes.h>
|
||||
|
||||
#ifdef CONFIG_KPROBES
|
||||
|
@ -149,13 +150,20 @@ struct kretprobe {
|
|||
int maxactive;
|
||||
int nmissed;
|
||||
size_t data_size;
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
struct rethook *rh;
|
||||
#else
|
||||
struct freelist_head freelist;
|
||||
struct kretprobe_holder *rph;
|
||||
#endif
|
||||
};
|
||||
|
||||
#define KRETPROBE_MAX_DATA_SIZE 4096
|
||||
|
||||
struct kretprobe_instance {
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
struct rethook_node node;
|
||||
#else
|
||||
union {
|
||||
struct freelist_node freelist;
|
||||
struct rcu_head rcu;
|
||||
|
@ -164,6 +172,7 @@ struct kretprobe_instance {
|
|||
struct kretprobe_holder *rph;
|
||||
kprobe_opcode_t *ret_addr;
|
||||
void *fp;
|
||||
#endif
|
||||
char data[];
|
||||
};
|
||||
|
||||
|
@ -186,10 +195,24 @@ extern void kprobe_busy_begin(void);
|
|||
extern void kprobe_busy_end(void);
|
||||
|
||||
#ifdef CONFIG_KRETPROBES
|
||||
extern void arch_prepare_kretprobe(struct kretprobe_instance *ri,
|
||||
struct pt_regs *regs);
|
||||
/* Check whether @p is used for implementing a trampoline. */
|
||||
extern int arch_trampoline_kprobe(struct kprobe *p);
|
||||
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
static nokprobe_inline struct kretprobe *get_kretprobe(struct kretprobe_instance *ri)
|
||||
{
|
||||
RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(),
|
||||
"Kretprobe is accessed from instance under preemptive context");
|
||||
|
||||
return (struct kretprobe *)READ_ONCE(ri->node.rethook->data);
|
||||
}
|
||||
static nokprobe_inline unsigned long get_kretprobe_retaddr(struct kretprobe_instance *ri)
|
||||
{
|
||||
return ri->node.ret_addr;
|
||||
}
|
||||
#else
|
||||
extern void arch_prepare_kretprobe(struct kretprobe_instance *ri,
|
||||
struct pt_regs *regs);
|
||||
void arch_kretprobe_fixup_return(struct pt_regs *regs,
|
||||
kprobe_opcode_t *correct_ret_addr);
|
||||
|
||||
|
@ -232,6 +255,12 @@ static nokprobe_inline struct kretprobe *get_kretprobe(struct kretprobe_instance
|
|||
return READ_ONCE(ri->rph->rp);
|
||||
}
|
||||
|
||||
static nokprobe_inline unsigned long get_kretprobe_retaddr(struct kretprobe_instance *ri)
|
||||
{
|
||||
return (unsigned long)ri->ret_addr;
|
||||
}
|
||||
#endif /* CONFIG_KRETPROBE_ON_RETHOOK */
|
||||
|
||||
#else /* !CONFIG_KRETPROBES */
|
||||
static inline void arch_prepare_kretprobe(struct kretprobe *rp,
|
||||
struct pt_regs *regs)
|
||||
|
@ -395,7 +424,11 @@ void unregister_kretprobe(struct kretprobe *rp);
|
|||
int register_kretprobes(struct kretprobe **rps, int num);
|
||||
void unregister_kretprobes(struct kretprobe **rps, int num);
|
||||
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
#define kprobe_flush_task(tk) do {} while (0)
|
||||
#else
|
||||
void kprobe_flush_task(struct task_struct *tk);
|
||||
#endif
|
||||
|
||||
void kprobe_free_init_mem(void);
|
||||
|
||||
|
@ -509,6 +542,19 @@ static inline bool is_kprobe_optinsn_slot(unsigned long addr)
|
|||
#endif /* !CONFIG_OPTPROBES */
|
||||
|
||||
#ifdef CONFIG_KRETPROBES
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr)
|
||||
{
|
||||
return is_rethook_trampoline(addr);
|
||||
}
|
||||
|
||||
static nokprobe_inline
|
||||
unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp,
|
||||
struct llist_node **cur)
|
||||
{
|
||||
return rethook_find_ret_addr(tsk, (unsigned long)fp, cur);
|
||||
}
|
||||
#else
|
||||
static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr)
|
||||
{
|
||||
return (void *)addr == kretprobe_trampoline_addr();
|
||||
|
@ -516,6 +562,7 @@ static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr)
|
|||
|
||||
unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp,
|
||||
struct llist_node **cur);
|
||||
#endif
|
||||
#else
|
||||
static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr)
|
||||
{
|
||||
|
|
|
@ -83,12 +83,15 @@ enum rxrpc_call_trace {
|
|||
rxrpc_call_error,
|
||||
rxrpc_call_got,
|
||||
rxrpc_call_got_kernel,
|
||||
rxrpc_call_got_timer,
|
||||
rxrpc_call_got_userid,
|
||||
rxrpc_call_new_client,
|
||||
rxrpc_call_new_service,
|
||||
rxrpc_call_put,
|
||||
rxrpc_call_put_kernel,
|
||||
rxrpc_call_put_noqueue,
|
||||
rxrpc_call_put_notimer,
|
||||
rxrpc_call_put_timer,
|
||||
rxrpc_call_put_userid,
|
||||
rxrpc_call_queued,
|
||||
rxrpc_call_queued_ref,
|
||||
|
@ -278,12 +281,15 @@ enum rxrpc_tx_point {
|
|||
EM(rxrpc_call_error, "*E*") \
|
||||
EM(rxrpc_call_got, "GOT") \
|
||||
EM(rxrpc_call_got_kernel, "Gke") \
|
||||
EM(rxrpc_call_got_timer, "GTM") \
|
||||
EM(rxrpc_call_got_userid, "Gus") \
|
||||
EM(rxrpc_call_new_client, "NWc") \
|
||||
EM(rxrpc_call_new_service, "NWs") \
|
||||
EM(rxrpc_call_put, "PUT") \
|
||||
EM(rxrpc_call_put_kernel, "Pke") \
|
||||
EM(rxrpc_call_put_noqueue, "PNQ") \
|
||||
EM(rxrpc_call_put_noqueue, "PnQ") \
|
||||
EM(rxrpc_call_put_notimer, "PnT") \
|
||||
EM(rxrpc_call_put_timer, "PTM") \
|
||||
EM(rxrpc_call_put_userid, "Pus") \
|
||||
EM(rxrpc_call_queued, "QUE") \
|
||||
EM(rxrpc_call_queued_ref, "QUR") \
|
||||
|
|
|
@ -108,6 +108,7 @@ obj-$(CONFIG_TRACING) += trace/
|
|||
obj-$(CONFIG_TRACE_CLOCK) += trace/
|
||||
obj-$(CONFIG_RING_BUFFER) += trace/
|
||||
obj-$(CONFIG_TRACEPOINTS) += trace/
|
||||
obj-$(CONFIG_RETHOOK) += trace/
|
||||
obj-$(CONFIG_IRQ_WORK) += irq_work.o
|
||||
obj-$(CONFIG_CPU_PM) += cpu_pm.o
|
||||
obj-$(CONFIG_BPF) += bpf/
|
||||
|
|
|
@ -5507,7 +5507,7 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
|
|||
}
|
||||
args = (const struct btf_param *)(func + 1);
|
||||
nargs = btf_type_vlen(func);
|
||||
if (nargs >= MAX_BPF_FUNC_ARGS) {
|
||||
if (nargs > MAX_BPF_FUNC_ARGS) {
|
||||
bpf_log(log,
|
||||
"The function %s has %d arguments. Too many.\n",
|
||||
tname, nargs);
|
||||
|
|
124 kernel/kprobes.c
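The kernel/kprobes.c changes below rebuild kretprobes on top of rethook when CONFIG_KRETPROBE_ON_RETHOOK is set, while keeping the user-facing kretprobe API unchanged. For orientation, a hedged sketch of a typical kretprobe consumer, modelled loosely on samples/kprobes (the probed symbol and all names are illustrative examples, not part of this merge):

	#include <linux/kernel.h>
	#include <linux/kprobes.h>
	#include <linux/module.h>
	#include <linux/ptrace.h>

	/* Runs when the probed function returns; behaves the same whether the
	 * backend is the legacy kretprobe trampoline or rethook.
	 */
	static int my_ret_handler(struct kretprobe_instance *ri,
				  struct pt_regs *regs)
	{
		pr_info("probed function returned %lu\n", regs_return_value(regs));
		return 0;
	}

	static struct kretprobe my_kretprobe = {
		.handler	= my_ret_handler,
		.maxactive	= 20,			/* concurrent instances */
		.kp.symbol_name	= "kernel_clone",	/* example target */
	};

	static int __init my_init(void)
	{
		return register_kretprobe(&my_kretprobe);
	}

	static void __exit my_exit(void)
	{
		unregister_kretprobe(&my_kretprobe);
	}

	module_init(my_init);
	module_exit(my_exit);
	MODULE_LICENSE("GPL");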
@ -1237,6 +1237,27 @@ void kprobes_inc_nmissed_count(struct kprobe *p)
|
|||
}
|
||||
NOKPROBE_SYMBOL(kprobes_inc_nmissed_count);
|
||||
|
||||
static struct kprobe kprobe_busy = {
|
||||
.addr = (void *) get_kprobe,
|
||||
};
|
||||
|
||||
void kprobe_busy_begin(void)
|
||||
{
|
||||
struct kprobe_ctlblk *kcb;
|
||||
|
||||
preempt_disable();
|
||||
__this_cpu_write(current_kprobe, &kprobe_busy);
|
||||
kcb = get_kprobe_ctlblk();
|
||||
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
|
||||
}
|
||||
|
||||
void kprobe_busy_end(void)
|
||||
{
|
||||
__this_cpu_write(current_kprobe, NULL);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
#if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
|
||||
static void free_rp_inst_rcu(struct rcu_head *head)
|
||||
{
|
||||
struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu);
|
||||
|
@ -1258,26 +1279,6 @@ static void recycle_rp_inst(struct kretprobe_instance *ri)
|
|||
}
|
||||
NOKPROBE_SYMBOL(recycle_rp_inst);
|
||||
|
||||
static struct kprobe kprobe_busy = {
|
||||
.addr = (void *) get_kprobe,
|
||||
};
|
||||
|
||||
void kprobe_busy_begin(void)
|
||||
{
|
||||
struct kprobe_ctlblk *kcb;
|
||||
|
||||
preempt_disable();
|
||||
__this_cpu_write(current_kprobe, &kprobe_busy);
|
||||
kcb = get_kprobe_ctlblk();
|
||||
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
|
||||
}
|
||||
|
||||
void kprobe_busy_end(void)
|
||||
{
|
||||
__this_cpu_write(current_kprobe, NULL);
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
/*
|
||||
* This function is called from delayed_put_task_struct() when a task is
|
||||
* dead and cleaned up to recycle any kretprobe instances associated with
|
||||
|
@ -1327,6 +1328,7 @@ static inline void free_rp_inst(struct kretprobe *rp)
|
|||
rp->rph = NULL;
|
||||
}
|
||||
}
|
||||
#endif /* !CONFIG_KRETPROBE_ON_RETHOOK */
|
||||
|
||||
/* Add the new probe to 'ap->list'. */
|
||||
static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
|
||||
|
@ -1925,6 +1927,7 @@ static struct notifier_block kprobe_exceptions_nb = {
|
|||
|
||||
#ifdef CONFIG_KRETPROBES
|
||||
|
||||
#if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
|
||||
/* This assumes the 'tsk' is the current task or the is not running. */
|
||||
static kprobe_opcode_t *__kretprobe_find_ret_addr(struct task_struct *tsk,
|
||||
struct llist_node **cur)
|
||||
|
@ -2087,6 +2090,57 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
|
|||
return 0;
|
||||
}
|
||||
NOKPROBE_SYMBOL(pre_handler_kretprobe);
|
||||
#else /* CONFIG_KRETPROBE_ON_RETHOOK */
|
||||
/*
|
||||
* This kprobe pre_handler is registered with every kretprobe. When probe
|
||||
* hits it will set up the return probe.
|
||||
*/
|
||||
static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
|
||||
{
|
||||
struct kretprobe *rp = container_of(p, struct kretprobe, kp);
|
||||
struct kretprobe_instance *ri;
|
||||
struct rethook_node *rhn;
|
||||
|
||||
rhn = rethook_try_get(rp->rh);
|
||||
if (!rhn) {
|
||||
rp->nmissed++;
|
||||
return 0;
|
||||
}
|
||||
|
||||
ri = container_of(rhn, struct kretprobe_instance, node);
|
||||
|
||||
if (rp->entry_handler && rp->entry_handler(ri, regs))
|
||||
rethook_recycle(rhn);
|
||||
else
|
||||
rethook_hook(rhn, regs, kprobe_ftrace(p));
|
||||
|
||||
return 0;
|
||||
}
|
||||
NOKPROBE_SYMBOL(pre_handler_kretprobe);
|
||||
|
||||
static void kretprobe_rethook_handler(struct rethook_node *rh, void *data,
|
||||
struct pt_regs *regs)
|
||||
{
|
||||
struct kretprobe *rp = (struct kretprobe *)data;
|
||||
struct kretprobe_instance *ri;
|
||||
struct kprobe_ctlblk *kcb;
|
||||
|
||||
/* The data must NOT be null. This means rethook data structure is broken. */
|
||||
if (WARN_ON_ONCE(!data))
|
||||
return;
|
||||
|
||||
__this_cpu_write(current_kprobe, &rp->kp);
|
||||
kcb = get_kprobe_ctlblk();
|
||||
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
|
||||
|
||||
ri = container_of(rh, struct kretprobe_instance, node);
|
||||
rp->handler(ri, regs);
|
||||
|
||||
__this_cpu_write(current_kprobe, NULL);
|
||||
}
|
||||
NOKPROBE_SYMBOL(kretprobe_rethook_handler);
|
||||
|
||||
#endif /* !CONFIG_KRETPROBE_ON_RETHOOK */
|
||||
|
||||
/**
|
||||
* kprobe_on_func_entry() -- check whether given address is function entry
|
||||
|
@ -2155,6 +2209,29 @@ int register_kretprobe(struct kretprobe *rp)
|
|||
rp->maxactive = num_possible_cpus();
|
||||
#endif
|
||||
}
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
rp->rh = rethook_alloc((void *)rp, kretprobe_rethook_handler);
|
||||
if (!rp->rh)
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < rp->maxactive; i++) {
|
||||
inst = kzalloc(sizeof(struct kretprobe_instance) +
|
||||
rp->data_size, GFP_KERNEL);
|
||||
if (inst == NULL) {
|
||||
rethook_free(rp->rh);
|
||||
rp->rh = NULL;
|
||||
return -ENOMEM;
|
||||
}
|
||||
rethook_add_node(rp->rh, &inst->node);
|
||||
}
|
||||
rp->nmissed = 0;
|
||||
/* Establish function entry probe point */
|
||||
ret = register_kprobe(&rp->kp);
|
||||
if (ret != 0) {
|
||||
rethook_free(rp->rh);
|
||||
rp->rh = NULL;
|
||||
}
|
||||
#else /* !CONFIG_KRETPROBE_ON_RETHOOK */
|
||||
rp->freelist.head = NULL;
|
||||
rp->rph = kzalloc(sizeof(struct kretprobe_holder), GFP_KERNEL);
|
||||
if (!rp->rph)
|
||||
|
@ -2179,6 +2256,7 @@ int register_kretprobe(struct kretprobe *rp)
|
|||
ret = register_kprobe(&rp->kp);
|
||||
if (ret != 0)
|
||||
free_rp_inst(rp);
|
||||
#endif
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(register_kretprobe);
|
||||
|
@ -2217,7 +2295,11 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
|
|||
for (i = 0; i < num; i++) {
|
||||
if (__unregister_kprobe_top(&rps[i]->kp) < 0)
|
||||
rps[i]->kp.addr = NULL;
|
||||
#ifdef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
rethook_free(rps[i]->rh);
|
||||
#else
|
||||
rps[i]->rph->rp = NULL;
|
||||
#endif
|
||||
}
|
||||
mutex_unlock(&kprobe_mutex);
|
||||
|
||||
|
@ -2225,7 +2307,9 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
|
|||
for (i = 0; i < num; i++) {
|
||||
if (rps[i]->kp.addr) {
|
||||
__unregister_kprobe_bottom(&rps[i]->kp);
|
||||
#ifndef CONFIG_KRETPROBE_ON_RETHOOK
|
||||
free_rp_inst(rps[i]);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
}
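The kretprobe API that register_kretprobe()/unregister_kretprobes() expose is unchanged by the rethook backend above. As a reference point, a minimal consumer sketch (not part of this series; the probed symbol kernel_clone is an arbitrary example) works the same whether or not CONFIG_KRETPROBE_ON_RETHOOK is enabled:

/* Minimal kretprobe consumer sketch; behaves identically with the
 * rethook-backed or the legacy trampoline-based implementation. */
#include <linux/module.h>
#include <linux/kprobes.h>

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	pr_info("probed function returned %lu\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler	= ret_handler,
	.maxactive	= 20,			/* max concurrent instances */
	.kp.symbol_name	= "kernel_clone",	/* arbitrary example target */
};

static int __init example_init(void)
{
	return register_kretprobe(&my_kretprobe);
}

static void __exit example_exit(void)
{
	unregister_kretprobe(&my_kretprobe);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");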
kernel/trace/fprobe.c

@@ -150,15 +150,15 @@ static int fprobe_init_rethook(struct fprobe *fp, int num)
 
 	fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler);
 	for (i = 0; i < size; i++) {
-		struct rethook_node *node;
+		struct fprobe_rethook_node *node;
 
-		node = kzalloc(sizeof(struct fprobe_rethook_node), GFP_KERNEL);
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
 		if (!node) {
 			rethook_free(fp->rethook);
 			fp->rethook = NULL;
 			return -ENOMEM;
 		}
-		rethook_add_node(fp->rethook, node);
+		rethook_add_node(fp->rethook, &node->node);
 	}
 	return 0;
 }

@@ -215,7 +215,7 @@ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter)
 	 * correctly calculate the total number of filtered symbols
	 * from both filter and notfilter.
	 */
-	hash = fp->ops.local_hash.filter_hash;
+	hash = rcu_access_pointer(fp->ops.local_hash.filter_hash);
 	if (WARN_ON_ONCE(!hash))
		goto out;
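register_fprobe(), seen in the hunk header above, takes glob filters rather than a single symbol. A rough caller sketch follows; the entry/exit handler prototypes are an assumption based on the fprobe API as introduced in this cycle (later kernels revised them), and the glob patterns are arbitrary:

/* Sketch only: handler prototypes assumed from the initial fprobe API. */
#include <linux/module.h>
#include <linux/fprobe.h>

static void entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
{
	pr_info("entered 0x%lx\n", ip);
}

static void exit_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
{
	pr_info("returning to 0x%lx\n", ip);
}

static struct fprobe my_fprobe = {
	.entry_handler	= entry_handler,
	.exit_handler	= exit_handler,
};

static int __init fprobe_sketch_init(void)
{
	/* probe every function matching "tcp_*" except "tcp_v6_*" */
	return register_fprobe(&my_fprobe, "tcp_*", "tcp_v6_*");
}

static void __exit fprobe_sketch_exit(void)
{
	unregister_fprobe(&my_fprobe);
}

module_init(fprobe_sketch_init);
module_exit(fprobe_sketch_exit);
MODULE_LICENSE("GPL");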
kernel/trace/trace_kprobe.c

@@ -1433,7 +1433,7 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
 	fbuffer.regs = regs;
 	entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
 	entry->func = (unsigned long)tk->rp.kp.addr;
-	entry->ret_ip = (unsigned long)ri->ret_addr;
+	entry->ret_ip = get_kretprobe_retaddr(ri);
 	store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize);
 
 	trace_event_buffer_commit(&fbuffer);

@@ -1628,7 +1628,7 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
 		return;
 
 	entry->func = (unsigned long)tk->rp.kp.addr;
-	entry->ret_ip = (unsigned long)ri->ret_addr;
+	entry->ret_ip = get_kretprobe_retaddr(ri);
 	store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize);
 	perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
 			      head, NULL);
net/ax25/af_ax25.c

@@ -991,10 +991,6 @@ static int ax25_release(struct socket *sock)
 	sock_orphan(sk);
 	ax25 = sk_to_ax25(sk);
 	ax25_dev = ax25->ax25_dev;
-	if (ax25_dev) {
-		dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
-		ax25_dev_put(ax25_dev);
-	}
 
 	if (sk->sk_type == SOCK_SEQPACKET) {
 		switch (ax25->state) {

@@ -1056,6 +1052,15 @@ static int ax25_release(struct socket *sock)
 		sk->sk_state_change(sk);
 		ax25_destroy_socket(ax25);
 	}
+	if (ax25_dev) {
+		del_timer_sync(&ax25->timer);
+		del_timer_sync(&ax25->t1timer);
+		del_timer_sync(&ax25->t2timer);
+		del_timer_sync(&ax25->t3timer);
+		del_timer_sync(&ax25->idletimer);
+		dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+		ax25_dev_put(ax25_dev);
+	}
 
 	sock->sk = NULL;
 	release_sock(sk);
net/can/isotp.c

@@ -1050,7 +1050,7 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 	int noblock = flags & MSG_DONTWAIT;
 	int ret = 0;
 
-	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC))
+	if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
 		return -EINVAL;
 
 	if (!so->bound)
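With MSG_PEEK accepted above, a userspace reader can inspect a queued ISO-TP PDU without consuming it. A hedged userspace sketch (interface name and CAN IDs are arbitrary examples, error handling trimmed):

/* Peek at a waiting ISO-TP PDU, then read it for real. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct ifreq ifr;
	char buf[4096];
	int s, n;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
	strcpy(ifr.ifr_name, "can0");		/* assumed interface name */
	ioctl(s, SIOCGIFINDEX, &ifr);
	addr.can_ifindex = ifr.ifr_ifindex;
	addr.can_addr.tp.rx_id = 0x7e8;		/* assumed CAN IDs */
	addr.can_addr.tp.tx_id = 0x7e0;
	bind(s, (struct sockaddr *)&addr, sizeof(addr));

	n = recv(s, buf, sizeof(buf), MSG_PEEK);	/* look, leave it queued */
	printf("peeked %d bytes\n", n);
	n = recv(s, buf, sizeof(buf), 0);		/* now actually consume it */
	printf("read %d bytes\n", n);
	close(s);
	return 0;
}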
net/openvswitch/actions.c

@@ -1539,8 +1539,8 @@ static int clone_execute(struct datapath *dp, struct sk_buff *skb,
 			pr_warn("%s: deferred action limit reached, drop sample action\n",
 				ovs_dp_name(dp));
 		} else { /* Recirc action */
-			pr_warn("%s: deferred action limit reached, drop recirc action\n",
-				ovs_dp_name(dp));
+			pr_warn("%s: deferred action limit reached, drop recirc action (recirc_id=%#x)\n",
+				ovs_dp_name(dp), recirc_id);
 		}
 	}
 }
net/openvswitch/flow_netlink.c

@@ -2230,8 +2230,8 @@ static int __ovs_nla_put_key(const struct sw_flow_key *swkey,
 		icmpv6_key->icmpv6_type = ntohs(output->tp.src);
 		icmpv6_key->icmpv6_code = ntohs(output->tp.dst);
 
-		if (icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_SOLICITATION ||
-		    icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_ADVERTISEMENT) {
+		if (swkey->tp.src == htons(NDISC_NEIGHBOUR_SOLICITATION) ||
+		    swkey->tp.src == htons(NDISC_NEIGHBOUR_ADVERTISEMENT)) {
 			struct ovs_key_nd *nd_key;
 
 			nla = nla_reserve(skb, OVS_KEY_ATTR_ND, sizeof(*nd_key));
net/rxrpc/ar-internal.h

@@ -777,14 +777,12 @@ void rxrpc_propose_ACK(struct rxrpc_call *, u8, u32, bool, bool,
 		       enum rxrpc_propose_ack_trace);
 void rxrpc_process_call(struct work_struct *);
 
-static inline void rxrpc_reduce_call_timer(struct rxrpc_call *call,
-					   unsigned long expire_at,
-					   unsigned long now,
-					   enum rxrpc_timer_trace why)
-{
-	trace_rxrpc_timer(call, why, now);
-	timer_reduce(&call->timer, expire_at);
-}
+void rxrpc_reduce_call_timer(struct rxrpc_call *call,
+			     unsigned long expire_at,
+			     unsigned long now,
+			     enum rxrpc_timer_trace why);
+
+void rxrpc_delete_call_timer(struct rxrpc_call *call);
 
 /*
  * call_object.c

@@ -808,6 +806,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
 bool __rxrpc_queue_call(struct rxrpc_call *);
 bool rxrpc_queue_call(struct rxrpc_call *);
 void rxrpc_see_call(struct rxrpc_call *);
+bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op);
 void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
 void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
 void rxrpc_cleanup_call(struct rxrpc_call *);
net/rxrpc/call_event.c

@@ -310,7 +310,7 @@ recheck_state:
 	}
 
 	if (call->state == RXRPC_CALL_COMPLETE) {
-		del_timer_sync(&call->timer);
+		rxrpc_delete_call_timer(call);
 		goto out_put;
 	}
 
net/rxrpc/call_object.c

@@ -53,10 +53,30 @@ static void rxrpc_call_timer_expired(struct timer_list *t)
 
 	if (call->state < RXRPC_CALL_COMPLETE) {
 		trace_rxrpc_timer(call, rxrpc_timer_expired, jiffies);
-		rxrpc_queue_call(call);
+		__rxrpc_queue_call(call);
+	} else {
+		rxrpc_put_call(call, rxrpc_call_put);
 	}
 }
 
+void rxrpc_reduce_call_timer(struct rxrpc_call *call,
+			     unsigned long expire_at,
+			     unsigned long now,
+			     enum rxrpc_timer_trace why)
+{
+	if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
+		trace_rxrpc_timer(call, why, now);
+		if (timer_reduce(&call->timer, expire_at))
+			rxrpc_put_call(call, rxrpc_call_put_notimer);
+	}
+}
+
+void rxrpc_delete_call_timer(struct rxrpc_call *call)
+{
+	if (del_timer_sync(&call->timer))
+		rxrpc_put_call(call, rxrpc_call_put_timer);
+}
+
 static struct lock_class_key rxrpc_call_user_mutex_lock_class_key;
 
 /*

@@ -463,6 +483,17 @@ void rxrpc_see_call(struct rxrpc_call *call)
 	}
 }
 
+bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+{
+	const void *here = __builtin_return_address(0);
+	int n = atomic_fetch_add_unless(&call->usage, 1, 0);
+
+	if (n == 0)
+		return false;
+	trace_rxrpc_call(call->debug_id, op, n, here, NULL);
+	return true;
+}
+
 /*
  * Note the addition of a ref on a call.
  */

@@ -510,8 +541,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 	spin_unlock_bh(&call->lock);
 
 	rxrpc_put_call_slot(call);
-
-	del_timer_sync(&call->timer);
+	rxrpc_delete_call_timer(call);
 
 	/* Make sure we don't get any more notifications */
 	write_lock_bh(&rx->recvmsg_lock);

@@ -618,6 +648,8 @@ static void rxrpc_destroy_call(struct work_struct *work)
 	struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor);
 	struct rxrpc_net *rxnet = call->rxnet;
 
+	rxrpc_delete_call_timer(call);
+
 	rxrpc_put_connection(call->conn);
 	rxrpc_put_peer(call->peer);
 	kfree(call->rxtx_buffer);

@@ -652,8 +684,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
 
 	memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
 
-	del_timer_sync(&call->timer);
-
 	ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
 	ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
 
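The call_object.c changes above pin the call with a reference for as long as its timer may still fire. The same pattern in isolation, as an illustrative sketch (not rxrpc code): the armed timer owns one reference, arming takes a new one first, and whoever removes a pending timer drops the reference that timer was holding.

/* Generic refcounted-timer sketch mirroring the pattern used above. */
#include <linux/timer.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
	struct timer_list timer;
	refcount_t ref;
};

static void obj_put(struct obj *o)
{
	if (refcount_dec_and_test(&o->ref))
		kfree(o);
}

static void obj_timer_fn(struct timer_list *t)
{
	struct obj *o = from_timer(o, t, timer);

	/* do the timed work, then drop the reference that armed us */
	obj_put(o);
}

static struct obj *obj_alloc(void)
{
	struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

	if (!o)
		return NULL;
	refcount_set(&o->ref, 1);
	timer_setup(&o->timer, obj_timer_fn, 0);
	return o;
}

static void obj_arm_timer(struct obj *o, unsigned long expire_at)
{
	if (!refcount_inc_not_zero(&o->ref))	/* object already dying: don't arm */
		return;
	if (timer_reduce(&o->timer, expire_at))	/* timer was already pending ... */
		obj_put(o);			/* ... so it already holds a ref */
}

static void obj_delete_timer(struct obj *o)
{
	if (del_timer_sync(&o->timer))		/* we cancelled a pending timer */
		obj_put(o);			/* drop the ref the timer held */
}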
net/rxrpc/server_key.c

@@ -84,6 +84,9 @@ static int rxrpc_preparse_s(struct key_preparsed_payload *prep)
 
 	prep->payload.data[1] = (struct rxrpc_security *)sec;
 
+	if (!sec->preparse_server_key)
+		return -EINVAL;
+
 	return sec->preparse_server_key(prep);
 }
 
@@ -91,7 +94,7 @@ static void rxrpc_free_preparse_s(struct key_preparsed_payload *prep)
 {
 	const struct rxrpc_security *sec = prep->payload.data[1];
 
-	if (sec)
+	if (sec && sec->free_preparse_server_key)
 		sec->free_preparse_server_key(prep);
 }
 
@@ -99,7 +102,7 @@ static void rxrpc_destroy_s(struct key *key)
 {
 	const struct rxrpc_security *sec = key->payload.data[1];
 
-	if (sec)
+	if (sec && sec->destroy_server_key)
 		sec->destroy_server_key(key);
 }
net/xdp/xsk_buff_pool.c

@@ -591,9 +591,13 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
 	u32 nb_entries1 = 0, nb_entries2;
 
 	if (unlikely(pool->dma_need_sync)) {
+		struct xdp_buff *buff;
+
 		/* Slow path */
-		*xdp = xp_alloc(pool);
-		return !!*xdp;
+		buff = xp_alloc(pool);
+		if (buff)
+			*xdp = buff;
+		return !!buff;
 	}
 
 	if (unlikely(pool->free_list_cnt)) {
tools/bpf/bpftool/feature.c

@@ -207,7 +207,10 @@ static void probe_unprivileged_disabled(void)
 		printf("bpf() syscall for unprivileged users is enabled\n");
 		break;
 	case 1:
-		printf("bpf() syscall restricted to privileged users\n");
+		printf("bpf() syscall restricted to privileged users (without recovery)\n");
+		break;
+	case 2:
+		printf("bpf() syscall restricted to privileged users (admin can change)\n");
 		break;
 	case -1:
 		printf("Unable to retrieve required privileges for bpf() syscall\n");
tools/bpf/bpftool/gen.c

@@ -477,7 +477,7 @@ static void codegen_asserts(struct bpf_object *obj, const char *obj_name)
 	codegen("\
 		\n\
 		__attribute__((unused)) static void			    \n\
-		%1$s__assert(struct %1$s *s)				    \n\
+		%1$s__assert(struct %1$s *s __attribute__((unused)))	    \n\
 		{							    \n\
 		#ifdef __cplusplus					    \n\
 		#define _Static_assert static_assert			    \n\
tools/include/uapi/linux/bpf.h

@@ -3009,8 +3009,8 @@ union bpf_attr {
  *
  *			# sysctl kernel.perf_event_max_stack=<new value>
  *	Return
- *		A non-negative value equal to or less than *size* on success,
- *		or a negative error in case of failure.
+ *		The non-negative copied *buf* length equal to or less than
+ *		*size* on success, or a negative error in case of failure.
  *
  * long bpf_skb_load_bytes_relative(const void *skb, u32 offset, void *to, u32 len, u32 start_header)
  *	Description

@@ -4316,8 +4316,8 @@ union bpf_attr {
  *
  *			# sysctl kernel.perf_event_max_stack=<new value>
  *	Return
- *		A non-negative value equal to or less than *size* on success,
- *		or a negative error in case of failure.
+ *		The non-negative copied *buf* length equal to or less than
+ *		*size* on success, or a negative error in case of failure.
  *
  * long bpf_load_hdr_opt(struct bpf_sock_ops *skops, void *searchby_res, u32 len, u64 flags)
  *	Description
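The reworded Return text above describes bpf_get_stack() as returning the number of bytes actually copied into the buffer. A hedged BPF-side sketch using that value (map and section names are arbitrary; the attach point mirrors the kprobe used by the selftests below):

/* Sketch: forward only the bytes bpf_get_stack() reports as copied. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} events SEC(".maps");

struct stack_event {
	long len;		/* copied length returned by bpf_get_stack() */
	__u64 stack[16];	/* small buffer: BPF stack space is limited */
};

SEC("kprobe/urandom_read")
int copy_stack(struct pt_regs *ctx)
{
	struct stack_event ev = {};

	ev.len = bpf_get_stack(ctx, ev.stack, sizeof(ev.stack), 0);
	if (ev.len <= 0)	/* negative error, or nothing copied */
		return 0;

	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &ev, sizeof(ev));
	return 0;
}

char _license[] SEC("license") = "GPL";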
tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c

@@ -29,11 +29,8 @@ static void get_stack_print_output(void *ctx, int cpu, void *data, __u32 size)
 	 */
 	struct get_stack_trace_t e;
 	int i, num_stack;
-	static __u64 cnt;
 	struct ksym *ks;
 
-	cnt++;
-
 	memset(&e, 0, sizeof(e));
 	memcpy(&e, data, size <= sizeof(e) ? size : sizeof(e));
 
tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c

@@ -39,16 +39,8 @@ struct {
 	__type(value, stack_trace_t);
 } stack_amap SEC(".maps");
 
-/* taken from /sys/kernel/debug/tracing/events/random/urandom_read/format */
-struct random_urandom_args {
-	unsigned long long pad;
-	int got_bits;
-	int pool_left;
-	int input_left;
-};
-
-SEC("tracepoint/random/urandom_read")
-int oncpu(struct random_urandom_args *args)
+SEC("kprobe/urandom_read")
+int oncpu(struct pt_regs *args)
 {
 	__u32 max_len = sizeof(struct bpf_stack_build_id)
 			* PERF_MAX_STACK_DEPTH;
tools/testing/selftests/bpf/test_lpm_map.c

@@ -209,7 +209,8 @@ static void test_lpm_order(void)
 static void test_lpm_map(int keysize)
 {
 	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_NO_PREALLOC);
-	size_t i, j, n_matches, n_matches_after_delete, n_nodes, n_lookups;
+	volatile size_t n_matches, n_matches_after_delete;
+	size_t i, j, n_nodes, n_lookups;
 	struct tlpm_node *t, *list = NULL;
 	struct bpf_lpm_trie_key *key;
 	uint8_t *data, *value;
tools/testing/selftests/wireguard/qemu/init.c

@@ -56,26 +56,14 @@ static void print_banner(void)
 
 static void seed_rng(void)
 {
-	int fd;
-	struct {
-		int entropy_count;
-		int buffer_size;
-		unsigned char buffer[256];
-	} entropy = {
-		.entropy_count = sizeof(entropy.buffer) * 8,
-		.buffer_size = sizeof(entropy.buffer),
-		.buffer = "Adding real entropy is not actually important for these tests. Don't try this at home, kids!"
-	};
+	int bits = 256, fd;
 
-	if (mknod("/dev/urandom", S_IFCHR | 0644, makedev(1, 9)))
-		panic("mknod(/dev/urandom)");
-	fd = open("/dev/urandom", O_WRONLY);
+	pretty_message("[+] Fake seeding RNG...");
+	fd = open("/dev/random", O_WRONLY);
 	if (fd < 0)
-		panic("open(urandom)");
-	for (int i = 0; i < 256; ++i) {
-		if (ioctl(fd, RNDADDENTROPY, &entropy) < 0)
-			panic("ioctl(urandom)");
-	}
+		panic("open(random)");
+	if (ioctl(fd, RNDADDTOENTCNT, &bits) < 0)
+		panic("ioctl(RNDADDTOENTCNT)");
 	close(fd);
 }
 
@@ -270,10 +258,10 @@ static void check_leaks(void)
 
 int main(int argc, char *argv[])
 {
-	seed_rng();
 	ensure_console();
 	print_banner();
 	mount_filesystems();
+	seed_rng();
 	kmod_selftests();
 	enable_logging();
 	clear_leaks();