Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching
Pull livepatching updates from Jiri Kosina:

 - handle 'infinitely'-long sleeping tasks, from Miroslav Benes

 - remove 'immediate' feature, as it turns out it doesn't provide the
   originally expected semantics, and brings more issues than value

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
  livepatch: add locking to force and signal functions
  livepatch: Remove immediate feature
  livepatch: force transition to finish
  livepatch: send a fake signal to all blocking tasks
commit e1c70f3238
@@ -33,6 +33,32 @@ Description:
 			An attribute which indicates whether the patch is currently in
 			transition.
 
+What:		/sys/kernel/livepatch/<patch>/signal
+Date:		Nov 2017
+KernelVersion:	4.15.0
+Contact:	live-patching@vger.kernel.org
+Description:
+			A writable attribute that allows administrator to affect the
+			course of an existing transition. Writing 1 sends a fake
+			signal to all remaining blocking tasks. The fake signal
+			means that no proper signal is delivered (there is no data in
+			signal pending structures). Tasks are interrupted or woken up,
+			and forced to change their patched state.
+
+What:		/sys/kernel/livepatch/<patch>/force
+Date:		Nov 2017
+KernelVersion:	4.15.0
+Contact:	live-patching@vger.kernel.org
+Description:
+			A writable attribute that allows administrator to affect the
+			course of an existing transition. Writing 1 clears
+			TIF_PATCH_PENDING flag of all tasks and thus forces the tasks to
+			the patched or unpatched state. Administrator should not
+			use this feature without a clearance from a patch
+			distributor. Removal (rmmod) of patch modules is permanently
+			disabled when the feature is used. See
+			Documentation/livepatch/livepatch.txt for more information.
+
 What:		/sys/kernel/livepatch/<patch>/<object>
 Date:		Nov 2014
 KernelVersion:	3.19.0
@@ -72,8 +72,7 @@ example, they add a NULL pointer or a boundary check, fix a race by adding
 a missing memory barrier, or add some locking around a critical section.
 Most of these changes are self contained and the function presents itself
 the same way to the rest of the system. In this case, the functions might
-be updated independently one by one. (This can be done by setting the
-'immediate' flag in the klp_patch struct.)
+be updated independently one by one.
 
 But there are more complex fixes. For example, a patch might change
 ordering of locking in multiple functions at the same time. Or a patch
@@ -125,12 +124,6 @@ safe to patch tasks:
       b) Patching CPU-bound user tasks. If the task is highly CPU-bound
          then it will get patched the next time it gets interrupted by an
          IRQ.
-      c) In the future it could be useful for applying patches for
-         architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
-         this case you would have to signal most of the tasks on the
-         system. However this isn't supported yet because there's
-         currently no way to patch kthreads without
-         HAVE_RELIABLE_STACKTRACE.
 
    3. For idle "swapper" tasks, since they don't ever exit the kernel, they
       instead have a klp_update_patch_state() call in the idle loop which
@@ -138,27 +131,16 @@ safe to patch tasks:
 
       (Note there's not yet such an approach for kthreads.)
 
-All the above approaches may be skipped by setting the 'immediate' flag
-in the 'klp_patch' struct, which will disable per-task consistency and
-patch all tasks immediately. This can be useful if the patch doesn't
-change any function or data semantics. Note that, even with this flag
-set, it's possible that some tasks may still be running with an old
-version of the function, until that function returns.
+Architectures which don't have HAVE_RELIABLE_STACKTRACE solely rely on
+the second approach. It's highly likely that some tasks may still be
+running with an old version of the function, until that function
+returns. In this case you would have to signal the tasks. This
+especially applies to kthreads. They may not be woken up and would need
+to be forced. See below for more information.
 
-There's also an 'immediate' flag in the 'klp_func' struct which allows
-you to specify that certain functions in the patch can be applied
-without per-task consistency. This might be useful if you want to patch
-a common function like schedule(), and the function change doesn't need
-consistency but the rest of the patch does.
-
-For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
-must set patch->immediate which causes all tasks to be patched
-immediately. This option should be used with care, only when the patch
-doesn't change any function or data semantics.
-
-In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
-may be allowed to use per-task consistency if we can come up with
-another way to patch kthreads.
+Unless we can come up with another way to patch kthreads, architectures
+without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
+the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
 is in transition. Only a single patch (the topmost patch on the stack)
@@ -176,8 +158,31 @@ If a patch is in transition, this file shows 0 to indicate the task is
 unpatched and 1 to indicate it's patched. Otherwise, if no patch is in
 transition, it shows -1. Any tasks which are blocking the transition
 can be signaled with SIGSTOP and SIGCONT to force them to change their
-patched state.
+patched state. This may be harmful to the system though.
+/sys/kernel/livepatch/<patch>/signal attribute provides a better alternative.
+Writing 1 to the attribute sends a fake signal to all remaining blocking
+tasks. No proper signal is actually delivered (there is no data in signal
+pending structures). Tasks are interrupted or woken up, and forced to change
+their patched state.
+
+Administrator can also affect a transition through
+/sys/kernel/livepatch/<patch>/force attribute. Writing 1 there clears
+TIF_PATCH_PENDING flag of all tasks and thus forces the tasks to the patched
+state. Important note! The force attribute is intended for cases when the
+transition gets stuck for a long time because of a blocking task. Administrator
+is expected to collect all necessary data (namely stack traces of such blocking
+tasks) and request a clearance from a patch distributor to force the transition.
+Unauthorized usage may cause harm to the system. It depends on the nature of the
+patch, which functions are (un)patched, and which functions the blocking tasks
+are sleeping in (/proc/<pid>/stack may help here). Removal (rmmod) of patch
+modules is permanently disabled when the force feature is used. It cannot be
+guaranteed there is no task sleeping in such module. It implies unbounded
+reference count if a patch module is disabled and enabled in a loop.
+
+Moreover, the usage of force may also affect future applications of live
+patches and cause even more harm to the system. Administrator should first
+consider to simply cancel a transition (see above). If force is used, reboot
+should be planned and no more live patches applied.
 
 3.1 Adding consistency model support to new architectures
 ---------------------------------------------------------
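The signal/force semantics documented above can be illustrated with a small simulation. This is a toy Python model, not kernel code: the Task class, the task names, and the simplified "safe point" migration are invented for illustration; the real kernel tracks a per-task TIF_PATCH_PENDING flag and walks stacks before clearing it.

```python
# Toy model of a livepatch transition (illustrative only; not kernel code).
# Writing 1 to .../signal nudges blocking tasks so they cross a safe point
# and migrate; writing 1 to .../force clears the pending flag everywhere,
# completing the transition unconditionally.

class Task:
    def __init__(self, name, kthread=False):
        self.name = name
        self.kthread = kthread
        self.pending = True    # stands in for TIF_PATCH_PENDING
        self.sleeping = True

    def hit_safe_point(self):
        # A task migrates itself when it crosses a patching safe point.
        self.pending = False

def send_fake_signal(tasks):
    # Interrupt or wake every still-blocking task; each then migrates.
    for t in tasks:
        if t.pending:
            t.sleeping = False
            t.hit_safe_point()

def force_transition(tasks):
    # Clear the pending flag of all tasks, bypassing consistency checks.
    for t in tasks:
        t.pending = False
    return True    # in the kernel: module removal permanently disabled

tasks = [Task("bash"), Task("kworker", kthread=True)]
send_fake_signal(tasks)
print(all(not t.pending for t in tasks))  # True: transition can finish
```

The model captures the documented trade-off: signal lets tasks migrate themselves at a safe point, while force drops the safety check and therefore carries the rmmod penalty described above.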
@@ -216,13 +221,6 @@ few options:
    a good backup option for those architectures which don't have
    reliable stack traces yet.
 
-In the meantime, patches for such architectures can bypass the
-consistency model by setting klp_patch.immediate to true. This option
-is perfectly fine for patches which don't change the semantics of the
-patched functions. In practice, this is usable for ~90% of security
-fixes. Use of this option also means the patch can't be unloaded after
-it has been disabled.
-
 
 4. Livepatch module
 ===================
@@ -278,9 +276,6 @@ into three levels:
      only for a particular object ( vmlinux or a kernel module ). Note that
      kallsyms allows for searching symbols according to the object name.
 
-     There's also an 'immediate' flag which, when set, patches the
-     function immediately, bypassing the consistency model safety checks.
-
   + struct klp_object defines an array of patched functions (struct
     klp_func) in the same object. Where the object is either vmlinux
     (NULL) or a module name.
@@ -299,9 +294,6 @@ into three levels:
     symbols are found. The only exception are symbols from objects
     (kernel modules) that have not been loaded yet.
 
-    Setting the 'immediate' flag applies the patch to all tasks
-    immediately, bypassing the consistency model safety checks.
-
     For more details on how the patch is applied on a per-task basis,
     see the "Consistency model" section.
 
@@ -316,14 +308,12 @@ section "Livepatch life-cycle" below for more details about these
 two operations.
 
 Module removal is only safe when there are no users of the underlying
-functions. The immediate consistency model is not able to detect this. The
-code just redirects the functions at the very beginning and it does not
-check if the functions are in use. In other words, it knows when the
-functions get called but it does not know when the functions return.
-Therefore it cannot be decided when the livepatch module can be safely
-removed. This is solved by a hybrid consistency model. When the system is
-transitioned to a new patch state (patched/unpatched) it is guaranteed that
-no task sleeps or runs in the old code.
+functions. This is the reason why the force feature permanently disables
+the removal. The forced tasks entered the functions but we cannot say
+that they returned back. Therefore it cannot be decided when the
+livepatch module can be safely removed. When the system is successfully
+transitioned to a new patch state (patched/unpatched) without being
+forced it is guaranteed that no task sleeps or runs in the old code.
 
 
 5. Livepatch life-cycle
@@ -337,19 +327,12 @@ First, the patch is applied only when all patched symbols for already
 loaded objects are found. The error handling is much easier if this
 check is done before particular functions get redirected.
 
-Second, the immediate consistency model does not guarantee that anyone is not
-sleeping in the new code after the patch is reverted. This means that the new
-code needs to stay around "forever". If the code is there, one could apply it
-again. Therefore it makes sense to separate the operations that might be done
-once and those that need to be repeated when the patch is enabled (applied)
-again.
-
-Third, it might take some time until the entire system is migrated
-when a more complex consistency model is used. The patch revert might
-block the livepatch module removal for too long. Therefore it is useful
-to revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information
-until the livepatch module is really removed.
+Second, it might take some time until the entire system is migrated with
+the hybrid consistency model being used. The patch revert might block
+the livepatch module removal for too long. Therefore it is useful to
+revert the patch using a separate operation that might be called
+explicitly. But it does not make sense to remove all information until
+the livepatch module is really removed.
 
 
 5.1. Registration
@@ -435,6 +418,9 @@ Information about the registered patches can be found under
 /sys/kernel/livepatch. The patches could be enabled and disabled
 by writing there.
 
+/sys/kernel/livepatch/<patch>/signal and /sys/kernel/livepatch/<patch>/force
+attributes allow administrator to affect a patching operation.
+
 See Documentation/ABI/testing/sysfs-kernel-livepatch for more details.
@@ -153,6 +153,9 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
 	if (thread_info_flags & _TIF_UPROBE)
 		uprobe_notify_resume(regs);
 
+	if (thread_info_flags & _TIF_PATCH_PENDING)
+		klp_update_patch_state(current);
+
 	if (thread_info_flags & _TIF_SIGPENDING) {
 		BUG_ON(regs != current->thread.regs);
 		do_signal(current);
@@ -163,9 +166,6 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
 		tracehook_notify_resume(regs);
 	}
 
-	if (thread_info_flags & _TIF_PATCH_PENDING)
-		klp_update_patch_state(current);
-
 	user_enter();
 }
@@ -153,6 +153,9 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
 		if (cached_flags & _TIF_UPROBE)
 			uprobe_notify_resume(regs);
 
+		if (cached_flags & _TIF_PATCH_PENDING)
+			klp_update_patch_state(current);
+
 		/* deal with pending signal delivery */
 		if (cached_flags & _TIF_SIGPENDING)
 			do_signal(regs);
@@ -165,9 +168,6 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
 		if (cached_flags & _TIF_USER_RETURN_NOTIFY)
 			fire_user_return_notifiers();
 
-		if (cached_flags & _TIF_PATCH_PENDING)
-			klp_update_patch_state(current);
-
 		/* Disable IRQs and retry */
 		local_irq_disable();
@@ -40,7 +40,6 @@
  * @new_func:	pointer to the patched function code
  * @old_sympos: a hint indicating which symbol position the old function
  *		can be found (optional)
- * @immediate:  patch the func immediately, bypassing safety mechanisms
  * @old_addr:	the address of the function being patched
  * @kobj:	kobject for sysfs resources
  * @stack_node:	list node for klp_ops func_stack list
@@ -76,7 +75,6 @@ struct klp_func {
 	 * in kallsyms for the given object is used.
 	 */
 	unsigned long old_sympos;
-	bool immediate;
 
 	/* internal */
 	unsigned long old_addr;
@@ -137,7 +135,6 @@ struct klp_object {
 * struct klp_patch - patch structure for live patching
 * @mod:	reference to the live patch module
 * @objs:	object entries for kernel objects to be patched
- * @immediate:  patch all funcs immediately, bypassing safety mechanisms
 * @list:	list node for global list of registered patches
 * @kobj:	kobject for sysfs resources
 * @enabled:	the patch is enabled (but operation may be incomplete)
@@ -147,7 +144,6 @@ struct klp_patch {
 	/* external */
 	struct module *mod;
 	struct klp_object *objs;
-	bool immediate;
 
 	/* internal */
 	struct list_head list;
@@ -366,11 +366,6 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	/*
 	 * A reference is taken on the patch module to prevent it from being
 	 * unloaded.
-	 *
-	 * Note: For immediate (no consistency model) patches we don't allow
-	 * patch modules to unload since there is no safe/sane method to
-	 * determine if a thread is still running in the patched code contained
-	 * in the patch module once the ftrace registration is successful.
 	 */
 	if (!try_module_get(patch->mod))
 		return -ENODEV;
@@ -454,6 +449,8 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
 * /sys/kernel/livepatch/<patch>
 * /sys/kernel/livepatch/<patch>/enabled
 * /sys/kernel/livepatch/<patch>/transition
+ * /sys/kernel/livepatch/<patch>/signal
+ * /sys/kernel/livepatch/<patch>/force
 * /sys/kernel/livepatch/<patch>/<object>
 * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
 */
@@ -528,11 +525,73 @@ static ssize_t transition_show(struct kobject *kobj,
 			patch == klp_transition_patch);
 }
 
+static ssize_t signal_store(struct kobject *kobj, struct kobj_attribute *attr,
+			    const char *buf, size_t count)
+{
+	struct klp_patch *patch;
+	int ret;
+	bool val;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	if (!val)
+		return count;
+
+	mutex_lock(&klp_mutex);
+
+	patch = container_of(kobj, struct klp_patch, kobj);
+	if (patch != klp_transition_patch) {
+		mutex_unlock(&klp_mutex);
+		return -EINVAL;
+	}
+
+	klp_send_signals();
+
+	mutex_unlock(&klp_mutex);
+
+	return count;
+}
+
+static ssize_t force_store(struct kobject *kobj, struct kobj_attribute *attr,
+			   const char *buf, size_t count)
+{
+	struct klp_patch *patch;
+	int ret;
+	bool val;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	if (!val)
+		return count;
+
+	mutex_lock(&klp_mutex);
+
+	patch = container_of(kobj, struct klp_patch, kobj);
+	if (patch != klp_transition_patch) {
+		mutex_unlock(&klp_mutex);
+		return -EINVAL;
+	}
+
+	klp_force_transition();
+
+	mutex_unlock(&klp_mutex);
+
+	return count;
+}
+
 static struct kobj_attribute enabled_kobj_attr = __ATTR_RW(enabled);
 static struct kobj_attribute transition_kobj_attr = __ATTR_RO(transition);
+static struct kobj_attribute signal_kobj_attr = __ATTR_WO(signal);
+static struct kobj_attribute force_kobj_attr = __ATTR_WO(force);
 static struct attribute *klp_patch_attrs[] = {
 	&enabled_kobj_attr.attr,
 	&transition_kobj_attr.attr,
+	&signal_kobj_attr.attr,
+	&force_kobj_attr.attr,
 	NULL
 };
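Both store handlers above follow the same shape: parse the input, treat a false value as a no-op, and act only on the patch currently in transition. A simplified Python model of that control flow (illustrative only; the kernel parses the buffer with kstrtobool(), which accepts more spellings than the "0"/"1" handled here, and error codes travel back through the sysfs write path):

```python
# Simplified model of the signal_store()/force_store() control flow.
# "action" stands in for klp_send_signals() or klp_force_transition().

def store(buf, patch_in_transition, action):
    val = {"0": False, "1": True}.get(buf.strip())
    if val is None:
        return "-EINVAL"        # unparsable input is rejected
    if not val:
        return "ok"             # writing 0 is accepted but does nothing
    if not patch_in_transition:
        return "-EINVAL"        # only the patch in transition may be acted on
    action()                    # perform the requested operation under lock
    return "ok"

fired = []
print(store("1", True, lambda: fired.append("signal")))   # ok
print(fired)                                              # ['signal']
print(store("1", False, lambda: fired.append("signal")))  # -EINVAL
```

The guard against patches that are not in transition is why writing 1 to signal or force of an idle patch fails with EINVAL rather than silently doing nothing.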
@@ -830,12 +889,7 @@ int klp_register_patch(struct klp_patch *patch)
 	if (!klp_initialized())
 		return -ENODEV;
 
-	/*
-	 * Architectures without reliable stack traces have to set
-	 * patch->immediate because there's currently no way to patch kthreads
-	 * with the consistency model.
-	 */
-	if (!klp_have_reliable_stack() && !patch->immediate) {
+	if (!klp_have_reliable_stack()) {
 		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
 		return -ENOSYS;
 	}
@@ -33,6 +33,8 @@ struct klp_patch *klp_transition_patch;
 
 static int klp_target_state = KLP_UNDEFINED;
 
+static bool klp_forced = false;
+
 /*
 * This work can be performed periodically to finish patching or unpatching any
 * "straggler" tasks which failed to transition in the first attempt.
@@ -80,7 +82,6 @@ static void klp_complete_transition(void)
 	struct klp_func *func;
 	struct task_struct *g, *task;
 	unsigned int cpu;
-	bool immediate_func = false;
 
 	pr_debug("'%s': completing %s transition\n",
 		 klp_transition_patch->mod->name,
@@ -102,16 +103,9 @@ static void klp_complete_transition(void)
 		klp_synchronize_transition();
 	}
 
-	if (klp_transition_patch->immediate)
-		goto done;
-
-	klp_for_each_object(klp_transition_patch, obj) {
-		klp_for_each_func(obj, func) {
+	klp_for_each_object(klp_transition_patch, obj)
+		klp_for_each_func(obj, func)
 			func->transition = false;
-			if (func->immediate)
-				immediate_func = true;
-		}
-	}
 
 	/* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
 	if (klp_target_state == KLP_PATCHED)
@@ -130,7 +124,6 @@ static void klp_complete_transition(void)
 		task->patch_state = KLP_UNDEFINED;
 	}
 
-done:
 	klp_for_each_object(klp_transition_patch, obj) {
 		if (!klp_is_object_loaded(obj))
 			continue;
@@ -144,13 +137,11 @@ done:
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
 	/*
-	 * See complementary comment in __klp_enable_patch() for why we
-	 * keep the module reference for immediate patches.
+	 * klp_forced set implies unbounded increase of module's ref count if
+	 * the module is disabled/enabled in a loop.
 	 */
-	if (!klp_transition_patch->immediate && !immediate_func &&
-	    klp_target_state == KLP_UNPATCHED) {
+	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
 		module_put(klp_transition_patch->mod);
-	}
 
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
@@ -218,9 +209,6 @@ static int klp_check_stack_func(struct klp_func *func,
 	struct klp_ops *ops;
 	int i;
 
-	if (func->immediate)
-		return 0;
-
 	for (i = 0; i < trace->nr_entries; i++) {
 		address = trace->entries[i];
 
@@ -382,13 +370,6 @@ void klp_try_complete_transition(void)
 
 	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		goto success;
-
 	/*
 	 * Try to switch the tasks to the target patch state by walking their
 	 * stacks and looking for any to-be-patched or to-be-unpatched
@@ -432,7 +413,6 @@ void klp_try_complete_transition(void)
 		return;
 	}
 
-success:
 	/* we're done, now cleanup the data structures */
 	klp_complete_transition();
 }
@@ -452,13 +432,6 @@ void klp_start_transition(void)
 		  klp_transition_patch->mod->name,
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		return;
-
 	/*
 	 * Mark all normal tasks as needing a patch state update. They'll
 	 * switch either in klp_try_complete_transition() or as they exit the
@@ -508,13 +481,6 @@ void klp_init_transition(struct klp_patch *patch, int state)
 	pr_debug("'%s': initializing %s transition\n", patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (patch->immediate)
-		return;
-
 	/*
 	 * Initialize all tasks to the initial patch state to prepare them for
 	 * switching to the target state.
@@ -608,3 +574,71 @@ void klp_copy_process(struct task_struct *child)
 	/* TIF_PATCH_PENDING gets copied in setup_thread_stack() */
 }
 
+
+/*
+ * Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
+ * Kthreads with TIF_PATCH_PENDING set are woken up. Only admin can request this
+ * action currently.
+ */
+void klp_send_signals(void)
+{
+	struct task_struct *g, *task;
+
+	pr_notice("signaling remaining tasks\n");
+
+	read_lock(&tasklist_lock);
+	for_each_process_thread(g, task) {
+		if (!klp_patch_pending(task))
+			continue;
+
+		/*
+		 * There is a small race here. We could see TIF_PATCH_PENDING
+		 * set and decide to wake up a kthread or send a fake signal.
+		 * Meanwhile the task could migrate itself and the action
+		 * would be meaningless. It is not serious though.
+		 */
+		if (task->flags & PF_KTHREAD) {
+			/*
+			 * Wake up a kthread which sleeps interruptibly and
+			 * still has not been migrated.
+			 */
+			wake_up_state(task, TASK_INTERRUPTIBLE);
+		} else {
+			/*
+			 * Send fake signal to all non-kthread tasks which are
+			 * still not migrated.
+			 */
+			spin_lock_irq(&task->sighand->siglock);
+			signal_wake_up(task, 0);
+			spin_unlock_irq(&task->sighand->siglock);
+		}
+	}
+	read_unlock(&tasklist_lock);
+}
+
+/*
+ * Drop TIF_PATCH_PENDING of all tasks on admin's request. This forces an
+ * existing transition to finish.
+ *
+ * NOTE: klp_update_patch_state(task) requires the task to be inactive or
+ * 'current'. This is not the case here and the consistency model could be
+ * broken. Administrator, who is the only one to execute
+ * klp_force_transition(), has to be aware of this.
+ */
+void klp_force_transition(void)
+{
+	struct task_struct *g, *task;
+	unsigned int cpu;
+
+	pr_warn("forcing remaining tasks to the patched state\n");
+
+	read_lock(&tasklist_lock);
+	for_each_process_thread(g, task)
+		klp_update_patch_state(task);
+	read_unlock(&tasklist_lock);
+
+	for_each_possible_cpu(cpu)
+		klp_update_patch_state(idle_task(cpu));
+
+	klp_forced = true;
+}
@@ -11,5 +11,7 @@ void klp_cancel_transition(void);
 void klp_start_transition(void);
 void klp_try_complete_transition(void);
 void klp_reverse_transition(void);
+void klp_send_signals(void);
+void klp_force_transition(void);
 
 #endif /* _LIVEPATCH_TRANSITION_H */
@@ -40,6 +40,7 @@
 #include <linux/cn_proc.h>
 #include <linux/compiler.h>
 #include <linux/posix-timers.h>
+#include <linux/livepatch.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/signal.h>
@@ -165,7 +166,8 @@ void recalc_sigpending_and_wake(struct task_struct *t)
 
 void recalc_sigpending(void)
 {
-	if (!recalc_sigpending_tsk(current) && !freezing(current))
+	if (!recalc_sigpending_tsk(current) && !freezing(current) &&
+	    !klp_patch_pending(current))
 		clear_thread_flag(TIF_SIGPENDING);
 
 }
@@ -197,21 +197,6 @@ static int livepatch_callbacks_demo_init(void)
 {
 	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
 	ret = klp_register_patch(&patch);
 	if (ret)
 		return ret;
@@ -71,21 +71,6 @@ static int livepatch_init(void)
 {
 	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
 	ret = klp_register_patch(&patch);
 	if (ret)
 		return ret;
@@ -133,21 +133,6 @@ static int livepatch_shadow_fix1_init(void)
 {
 	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
 	ret = klp_register_patch(&patch);
 	if (ret)
 		return ret;
@@ -128,21 +128,6 @@ static int livepatch_shadow_fix2_init(void)
 {
 	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
 	ret = klp_register_patch(&patch);
 	if (ret)
 		return ret;
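Taken together, these hunks replace the removed 'immediate' escape hatch with two admin-driven sysfs attributes. As a rough sketch of the admin-side usage (the patch name `livepatch_sample` below is only an example; actual writes require root and a loaded patch in transition, so the commands are printed rather than executed here):

```shell
# Hypothetical patch name; substitute the name of the loaded patch.
SYSFS=/sys/kernel/livepatch/livepatch_sample

# Fake signal: interrupt or wake the remaining blocking tasks so they
# can switch their patch state (no real signal is queued).
echo "echo 1 > $SYSFS/signal"

# Force: clear TIF_PATCH_PENDING on all tasks to finish the transition.
# This may break the consistency model, so it is a last resort.
echo "echo 1 > $SYSFS/force"
```

The `signal` knob is safe to retry; `force` sets `klp_forced` and should only be used when the administrator understands why the transition is stuck.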