Merge tag 'trace-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "New tracing features:

  - New PERMANENT flag to ftrace_ops when attaching a callback to a
    function.

    As /proc/sys/kernel/ftrace_enabled when set to zero will disable
    all attached callbacks in ftrace, this has a detrimental impact on
    live kernel tracing, as it disables all that it patched. If a
    ftrace_ops is registered to ftrace with the PERMANENT flag set, it
    will prevent ftrace_enabled from being disabled, and if
    ftrace_enabled is already disabled, it will prevent a ftrace_ops
    with the PERMANENT flag set from being registered.

  - New register_ftrace_direct().

    As eBPF would like to register its own trampolines to be called by
    the ftrace nop locations directly, without going through the ftrace
    trampoline, this function has been added. This allows for eBPF
    trampolines to live alongside ftrace, perf, kprobe and live
    patching. It also utilizes the ftrace enabled_functions file that
    keeps track of functions that have been modified in the kernel, to
    allow for security auditing.

  - Allow for kernel internal use of ftrace instances.

    Subsystems in the kernel can now create and destroy their own
    tracing instances, which allows them to have their own tracing
    buffer and to record events without worrying about other users
    writing over their data.

  - New seq_buf_hex_dump() that lets users use hex_dump() in their
    seq_buf usage.

  - Notifications now added to tracing_max_latency to allow user space
    to know when a new max latency is hit by one of the latency
    tracers.

  - Wider use of generic compare operations for use of bsearch and
    friends.

  - More synthetic event fields may be defined (32, up from 16).

  - Use of xarray for architectures with sparse system calls, for the
    system call trace events.

  This along with small clean ups and fixes"

* tag 'trace-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (51 commits)
  tracing: Enable syscall optimization for MIPS
  tracing: Use xarray for syscall trace events
  tracing: Sample module to demonstrate kernel access to Ftrace instances.
  tracing: Adding new functions for kernel access to Ftrace instances
  tracing: Fix Kconfig indentation
  ring-buffer: Fix typos in function ring_buffer_producer
  ftrace: Use BIT() macro
  ftrace: Return ENOTSUPP when DYNAMIC_FTRACE_WITH_DIRECT_CALLS is not configured
  ftrace: Rename ftrace_graph_stub to ftrace_stub_graph
  ftrace: Add a helper function to modify_ftrace_direct() to allow arch optimization
  ftrace: Add helper find_direct_entry() to consolidate code
  ftrace: Add another check for match in register_ftrace_direct()
  ftrace: Fix accounting bug with direct->count in register_ftrace_direct()
  ftrace/selftests: Fix spelling mistake "wakeing" -> "waking"
  tracing: Increase SYNTH_FIELDS_MAX for synthetic_events
  ftrace/samples: Add a sample module that implements modify_ftrace_direct()
  ftrace: Add modify_ftrace_direct()
  tracing: Add missing "inline" in stub function of latency_fsnotify()
  tracing: Remove stray tab in TRACE_EVAL_MAP_FILE's help text
  tracing: Use seq_buf_hex_dump() to dump buffers
  ...
commit 95f1fa9e34
@@ -146,7 +146,7 @@ FTRACE_OPS_FL_RECURSION_SAFE
         itself or any nested functions that those functions call.
 
         If this flag is set, it is possible that the callback will also
-        be called with preemption enabled (when CONFIG_PREEMPT is set),
+        be called with preemption enabled (when CONFIG_PREEMPTION is set),
         but this is not guaranteed.
 
 FTRACE_OPS_FL_IPMODIFY
@@ -170,6 +170,14 @@ FTRACE_OPS_FL_RCU
         a callback may be executed and RCU synchronization will not protect
         it.
 
+FTRACE_OPS_FL_PERMANENT
+        If this is set on any ftrace ops, then the tracing cannot disabled by
+        writing 0 to the proc sysctl ftrace_enabled. Equally, a callback with
+        the flag set cannot be registered if ftrace_enabled is 0.
+
+        Livepatch uses it not to lose the function redirection, so the system
+        stays protected.
+
 
 Filtering which functions to trace
 ==================================
 
@@ -2976,7 +2976,9 @@ Note, the proc sysctl ftrace_enable is a big on/off switch for the
 function tracer. By default it is enabled (when function tracing is
 enabled in the kernel). If it is disabled, all function tracing is
 disabled. This includes not only the function tracers for ftrace, but
-also for any other uses (perf, kprobes, stack tracing, profiling, etc).
+also for any other uses (perf, kprobes, stack tracing, profiling, etc). It
+cannot be disabled if there is a callback with FTRACE_OPS_FL_PERMANENT set
+registered.
 
 Please disable this with care.
 
@@ -939,6 +939,14 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
 	bool
 
+config HAVE_SPARSE_SYSCALL_NR
+	bool
+	help
+	  An architecture should select this if its syscall numbering is sparse
+	  to save space. For example, MIPS architecture has a syscall array with
+	  entries at 4000, 5000 and 6000 locations. This option turns on syscall
+	  related optimizations for a given architecture.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
@@ -74,6 +74,7 @@ config MIPS
 	select HAVE_PERF_EVENTS
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RSEQ
+	select HAVE_SPARSE_SYSCALL_NR
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP
@@ -111,6 +111,11 @@ void __stack_chk_fail(void)
 	error("stack-protector: Kernel stack is corrupted\n");
 }
 
+/* Needed because vmlinux.lds.h references this */
+void ftrace_stub(void)
+{
+}
+
 #ifdef CONFIG_SUPERH64
 #define stackalign 8
 #else
@@ -157,6 +157,7 @@ config X86
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 	select HAVE_EBPF_JIT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_EISA
@@ -28,6 +28,19 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 	return addr;
 }
 
+/*
+ * When a ftrace registered caller is tracing a function that is
+ * also set by a register_ftrace_direct() call, it needs to be
+ * differentiated in the ftrace_caller trampoline. To do this, we
+ * place the direct caller in the ORIG_AX part of pt_regs. This
+ * tells the ftrace_caller that there's a direct caller.
+ */
+static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
+{
+	/* Emulate a call */
+	regs->orig_ax = addr;
+}
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 struct dyn_arch_ftrace {
@@ -86,6 +86,14 @@
 	UNWIND_HINT sp_offset=\sp_offset
 .endm
 
+.macro UNWIND_HINT_SAVE
+	UNWIND_HINT type=UNWIND_HINT_TYPE_SAVE
+.endm
+
+.macro UNWIND_HINT_RESTORE
+	UNWIND_HINT type=UNWIND_HINT_TYPE_RESTORE
+.endm
+
 #else /* !__ASSEMBLY__ */
 
 #define UNWIND_HINT(sp_reg, sp_offset, type, end)		\
@@ -1042,6 +1042,20 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
+	/*
+	 * If the return location is actually pointing directly to
+	 * the start of a direct trampoline (if we trace the trampoline
+	 * it will still be offset by MCOUNT_INSN_SIZE), then the
+	 * return address is actually off by one word, and we
+	 * need to adjust for that.
+	 */
+	if (ftrace_direct_func_count) {
+		if (ftrace_find_direct_func(self_addr + MCOUNT_INSN_SIZE)) {
+			self_addr = *parent;
+			parent++;
+		}
+	}
+
 	/*
 	 * Protect against fault, even if it shouldn't
 	 * happen. This tool is too much intrusive to
@@ -85,6 +85,7 @@
 	movq %rdi, RDI(%rsp)
 	movq %r8, R8(%rsp)
 	movq %r9, R9(%rsp)
+	movq $0, ORIG_RAX(%rsp)
 	/*
 	 * Save the original RBP. Even though the mcount ABI does not
 	 * require this, it helps out callers.
@@ -111,7 +112,11 @@
 	subq $MCOUNT_INSN_SIZE, %rdi
 	.endm
 
-.macro restore_mcount_regs
+.macro restore_mcount_regs save=0
+
+	/* ftrace_regs_caller or frame pointers require this */
+	movq RBP(%rsp), %rbp
+
 	movq R9(%rsp), %r9
 	movq R8(%rsp), %r8
 	movq RDI(%rsp), %rdi
@@ -120,10 +125,7 @@
 	movq RCX(%rsp), %rcx
 	movq RAX(%rsp), %rax
 
-	/* ftrace_regs_caller can modify %rbp */
-	movq RBP(%rsp), %rbp
-
-	addq $MCOUNT_REG_SIZE, %rsp
+	addq $MCOUNT_REG_SIZE-\save, %rsp
 
 	.endm
 
@@ -174,6 +176,8 @@ SYM_FUNC_START(ftrace_regs_caller)
 	/* Save the current flags before any operations that can change them */
 	pushfq
 
+	UNWIND_HINT_SAVE
+
 	/* added 8 bytes to save flags */
 	save_mcount_regs 8
 	/* save_mcount_regs fills in first two parameters */
@@ -226,7 +230,33 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
 	movq R10(%rsp), %r10
 	movq RBX(%rsp), %rbx
 
-	restore_mcount_regs
+	movq ORIG_RAX(%rsp), %rax
+	movq %rax, MCOUNT_REG_SIZE-8(%rsp)
+
+	/* If ORIG_RAX is anything but zero, make this a call to that */
+	movq ORIG_RAX(%rsp), %rax
+	cmpq $0, %rax
+	je 1f
+
+	/* Swap the flags with orig_rax */
+	movq MCOUNT_REG_SIZE(%rsp), %rdi
+	movq %rdi, MCOUNT_REG_SIZE-8(%rsp)
+	movq %rax, MCOUNT_REG_SIZE(%rsp)
+
+	restore_mcount_regs 8
+
+	jmp 2f
+
+1:	restore_mcount_regs
+
+
+2:
+	/*
+	 * The stack layout is nondetermistic here, depending on which path was
+	 * taken. This confuses objtool and ORC, rightfully so. For now,
+	 * pretend the stack always looks like the non-direct case.
+	 */
+	UNWIND_HINT_RESTORE
+
 	/* Restore flags */
 	popfq
 
@@ -141,14 +141,23 @@
  * compiler option used. A given kernel image will only use one, AKA
  * FTRACE_CALLSITE_SECTION. We capture all of them here to avoid header
  * dependencies for FTRACE_CALLSITE_SECTION's definition.
+ *
+ * Need to also make ftrace_stub_graph point to ftrace_stub
+ * so that the same stub location may have different protocols
+ * and not mess up with C verifiers.
  */
 #define MCOUNT_REC()	. = ALIGN(8);				\
 			__start_mcount_loc = .;			\
 			KEEP(*(__mcount_loc))			\
 			KEEP(*(__patchable_function_entries))	\
-			__stop_mcount_loc = .;
+			__stop_mcount_loc = .;			\
+			ftrace_stub_graph = ftrace_stub;
 #else
-#define MCOUNT_REC()
+# ifdef CONFIG_FUNCTION_TRACER
+#  define MCOUNT_REC() ftrace_stub_graph = ftrace_stub;
+# else
+#  define MCOUNT_REC()
+# endif
 #endif
 
 #ifdef CONFIG_TRACE_BRANCH_PROFILING
@@ -5,6 +5,6 @@
 #include <linux/types.h>
 
 void *bsearch(const void *key, const void *base, size_t num, size_t size,
-	      int (*cmp)(const void *key, const void *elt));
+	      cmp_func_t cmp);
 
 #endif /* _LINUX_BSEARCH_H */
@@ -51,6 +51,7 @@ static inline void early_trace_init(void) { }
 
 struct module;
 struct ftrace_hash;
+struct ftrace_direct_func;
 
 #if defined(CONFIG_FUNCTION_TRACER) && defined(CONFIG_MODULES) && \
 	defined(CONFIG_DYNAMIC_FTRACE)
@@ -142,24 +143,30 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
  * PID     - Is affected by set_ftrace_pid (allows filtering on those pids)
  * RCU     - Set when the ops can only be called when RCU is watching.
  * TRACE_ARRAY - The ops->private points to a trace_array descriptor.
+ * PERMANENT - Set when the ops is permanent and should not be affected by
+ *             ftrace_enabled.
+ * DIRECT - Used by the direct ftrace_ops helper for direct functions
+ *          (internal ftrace only, should not be used by others)
  */
 enum {
-	FTRACE_OPS_FL_ENABLED			= 1 << 0,
-	FTRACE_OPS_FL_DYNAMIC			= 1 << 1,
-	FTRACE_OPS_FL_SAVE_REGS			= 1 << 2,
-	FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED	= 1 << 3,
-	FTRACE_OPS_FL_RECURSION_SAFE		= 1 << 4,
-	FTRACE_OPS_FL_STUB			= 1 << 5,
-	FTRACE_OPS_FL_INITIALIZED		= 1 << 6,
-	FTRACE_OPS_FL_DELETED			= 1 << 7,
-	FTRACE_OPS_FL_ADDING			= 1 << 8,
-	FTRACE_OPS_FL_REMOVING			= 1 << 9,
-	FTRACE_OPS_FL_MODIFYING			= 1 << 10,
-	FTRACE_OPS_FL_ALLOC_TRAMP		= 1 << 11,
-	FTRACE_OPS_FL_IPMODIFY			= 1 << 12,
-	FTRACE_OPS_FL_PID			= 1 << 13,
-	FTRACE_OPS_FL_RCU			= 1 << 14,
-	FTRACE_OPS_FL_TRACE_ARRAY		= 1 << 15,
+	FTRACE_OPS_FL_ENABLED			= BIT(0),
+	FTRACE_OPS_FL_DYNAMIC			= BIT(1),
+	FTRACE_OPS_FL_SAVE_REGS			= BIT(2),
+	FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED	= BIT(3),
+	FTRACE_OPS_FL_RECURSION_SAFE		= BIT(4),
+	FTRACE_OPS_FL_STUB			= BIT(5),
+	FTRACE_OPS_FL_INITIALIZED		= BIT(6),
+	FTRACE_OPS_FL_DELETED			= BIT(7),
+	FTRACE_OPS_FL_ADDING			= BIT(8),
+	FTRACE_OPS_FL_REMOVING			= BIT(9),
+	FTRACE_OPS_FL_MODIFYING			= BIT(10),
+	FTRACE_OPS_FL_ALLOC_TRAMP		= BIT(11),
+	FTRACE_OPS_FL_IPMODIFY			= BIT(12),
+	FTRACE_OPS_FL_PID			= BIT(13),
+	FTRACE_OPS_FL_RCU			= BIT(14),
+	FTRACE_OPS_FL_TRACE_ARRAY		= BIT(15),
+	FTRACE_OPS_FL_PERMANENT			= BIT(16),
+	FTRACE_OPS_FL_DIRECT			= BIT(17),
 };
 
 #ifdef CONFIG_DYNAMIC_FTRACE
@@ -239,6 +246,70 @@ static inline void ftrace_free_init_mem(void) { }
 static inline void ftrace_free_mem(struct module *mod, void *start, void *end) { }
 #endif /* CONFIG_FUNCTION_TRACER */
 
+struct ftrace_func_entry {
+	struct hlist_node hlist;
+	unsigned long ip;
+	unsigned long direct; /* for direct lookup only */
+};
+
+struct dyn_ftrace;
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+extern int ftrace_direct_func_count;
+int register_ftrace_direct(unsigned long ip, unsigned long addr);
+int unregister_ftrace_direct(unsigned long ip, unsigned long addr);
+int modify_ftrace_direct(unsigned long ip, unsigned long old_addr, unsigned long new_addr);
+struct ftrace_direct_func *ftrace_find_direct_func(unsigned long addr);
+int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
+				struct dyn_ftrace *rec,
+				unsigned long old_addr,
+				unsigned long new_addr);
+#else
+# define ftrace_direct_func_count 0
+static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
+{
+	return -ENOTSUPP;
+}
+static inline int unregister_ftrace_direct(unsigned long ip, unsigned long addr)
+{
+	return -ENOTSUPP;
+}
+static inline int modify_ftrace_direct(unsigned long ip,
+				       unsigned long old_addr, unsigned long new_addr)
+{
+	return -ENOTSUPP;
+}
+static inline struct ftrace_direct_func *ftrace_find_direct_func(unsigned long addr)
+{
+	return NULL;
+}
+static inline int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
+					      struct dyn_ftrace *rec,
+					      unsigned long old_addr,
+					      unsigned long new_addr)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
+#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+/*
+ * This must be implemented by the architecture.
+ * It is the way the ftrace direct_ops helper, when called
+ * via ftrace (because there's other callbacks besides the
+ * direct call), can inform the architecture's trampoline that this
+ * routine has a direct caller, and what the caller is.
+ *
+ * For example, in x86, it returns the direct caller
+ * callback function via the regs->orig_ax parameter.
+ * Then in the ftrace trampoline, if this is set, it makes
+ * the return from the trampoline jump to the direct caller
+ * instead of going back to the function it just traced.
+ */
+static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs,
+						 unsigned long addr) { }
+#endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
 #ifdef CONFIG_STACK_TRACER
 
 extern int stack_tracer_enabled;
@@ -291,8 +362,6 @@ static inline void stack_tracer_enable(void) { }
 int ftrace_arch_code_modify_prepare(void);
 int ftrace_arch_code_modify_post_process(void);
 
-struct dyn_ftrace;
-
 enum ftrace_bug_type {
 	FTRACE_BUG_UNKNOWN,
 	FTRACE_BUG_INIT,
@@ -330,6 +399,7 @@ bool is_ftrace_trampoline(unsigned long addr);
  *  REGS_EN  - the function is set up to save regs.
  *  IPMODIFY - the record allows for the IP address to be changed.
  *  DISABLED - the record is not ready to be touched yet
+ *  DIRECT   - there is a direct function to call
  *
  * When a new ftrace_ops is registered and wants a function to save
  * pt_regs, the rec->flag REGS is set. When the function has been
@@ -345,10 +415,12 @@ enum {
 	FTRACE_FL_TRAMP_EN	= (1UL << 27),
 	FTRACE_FL_IPMODIFY	= (1UL << 26),
 	FTRACE_FL_DISABLED	= (1UL << 25),
+	FTRACE_FL_DIRECT	= (1UL << 24),
+	FTRACE_FL_DIRECT_EN	= (1UL << 23),
 };
 
-#define FTRACE_REF_MAX_SHIFT	25
-#define FTRACE_FL_BITS		7
+#define FTRACE_REF_MAX_SHIFT	23
+#define FTRACE_FL_BITS		9
 #define FTRACE_FL_MASKED_BITS	((1UL << FTRACE_FL_BITS) - 1)
 #define FTRACE_FL_MASK		(FTRACE_FL_MASKED_BITS << FTRACE_REF_MAX_SHIFT)
 #define FTRACE_REF_MAX		((1UL << FTRACE_REF_MAX_SHIFT) - 1)
@@ -125,6 +125,9 @@ extern int seq_buf_putmem(struct seq_buf *s, const void *mem, unsigned int len);
 extern int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
 			      unsigned int len);
 extern int seq_buf_path(struct seq_buf *s, const struct path *path, const char *esc);
+extern int seq_buf_hex_dump(struct seq_buf *s, const char *prefix_str,
+			    int prefix_type, int rowsize, int groupsize,
+			    const void *buf, size_t len, bool ascii);
 
 #ifdef CONFIG_BINARY_PRINTF
 extern int
@@ -5,12 +5,12 @@
 #include <linux/types.h>
 
 void sort_r(void *base, size_t num, size_t size,
-	    int (*cmp)(const void *, const void *, const void *),
-	    void (*swap)(void *, void *, int),
+	    cmp_r_func_t cmp_func,
+	    swap_func_t swap_func,
 	    const void *priv);
 
 void sort(void *base, size_t num, size_t size,
-	  int (*cmp)(const void *, const void *),
-	  void (*swap)(void *, void *, int));
+	  cmp_func_t cmp_func,
+	  swap_func_t swap_func);
 
 #endif
@@ -24,6 +24,14 @@ struct trace_export {
 int register_ftrace_export(struct trace_export *export);
 int unregister_ftrace_export(struct trace_export *export);
 
+struct trace_array;
+
+void trace_printk_init_buffers(void);
+int trace_array_printk(struct trace_array *tr, unsigned long ip,
+		       const char *fmt, ...);
+void trace_array_put(struct trace_array *tr);
+struct trace_array *trace_array_get_by_name(const char *name);
+int trace_array_destroy(struct trace_array *tr);
 #endif	/* CONFIG_TRACING */
 
 #endif	/* _LINUX_TRACE_H */
@@ -45,6 +45,11 @@ const char *trace_print_array_seq(struct trace_seq *p,
 				  const void *buf, int count,
 				  size_t el_size);
 
+const char *
+trace_print_hex_dump_seq(struct trace_seq *p, const char *prefix_str,
+			 int prefix_type, int rowsize, int groupsize,
+			 const void *buf, size_t len, bool ascii);
+
 struct trace_iterator;
 struct trace_event;
 
@@ -550,7 +555,8 @@ extern int trace_event_get_offsets(struct trace_event_call *call);
 
 int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set);
 int trace_set_clr_event(const char *system, const char *event, int set);
-
+int trace_array_set_clr_event(struct trace_array *tr, const char *system,
+		const char *event, bool enable);
 /*
  * The double __builtin_constant_p is because gcc will give us an error
  * if we try to allocate the static variable to fmt if it is not a
@@ -92,6 +92,10 @@ extern int trace_seq_path(struct trace_seq *s, const struct path *path);
 extern void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
 			      int nmaskbits);
 
+extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
+			      int prefix_type, int rowsize, int groupsize,
+			      const void *buf, size_t len, bool ascii);
+
 #else /* CONFIG_TRACING */
 static inline void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
 {
@@ -225,5 +225,10 @@ struct callback_head {
 typedef void (*rcu_callback_t)(struct rcu_head *head);
 typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func);
 
+typedef void (*swap_func_t)(void *a, void *b, int size);
+
+typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
+typedef int (*cmp_func_t)(const void *a, const void *b);
+
 #endif /*  __ASSEMBLY__ */
 #endif /* _LINUX_TYPES_H */
@@ -340,6 +340,12 @@ TRACE_MAKE_SYSTEM_STR();
 		trace_print_array_seq(p, array, count, el_size);	\
 	})
 
+#undef __print_hex_dump
+#define __print_hex_dump(prefix_str, prefix_type,			\
+			 rowsize, groupsize, buf, len, ascii)		\
+	trace_print_hex_dump_seq(p, prefix_str, prefix_type,		\
+				 rowsize, groupsize, buf, len, ascii)
+
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
 static notrace enum print_line_t					\
@@ -196,7 +196,8 @@ static int klp_patch_func(struct klp_func *func)
 	ops->fops.func = klp_ftrace_handler;
 	ops->fops.flags = FTRACE_OPS_FL_SAVE_REGS |
 			  FTRACE_OPS_FL_DYNAMIC |
-			  FTRACE_OPS_FL_IPMODIFY;
+			  FTRACE_OPS_FL_IPMODIFY |
+			  FTRACE_OPS_FL_PERMANENT;
 
 	list_add(&ops->node, &klp_ops);
 
@@ -3728,7 +3728,6 @@ static int complete_formation(struct module *mod, struct load_info *info)
 
 	module_enable_ro(mod, false);
 	module_enable_nx(mod);
-	module_enable_x(mod);
 
 	/* Mark state as coming so strong_try_module_get() ignores us,
 	 * but kallsyms etc. can see us. */
@@ -3751,6 +3750,11 @@ static int prepare_coming_module(struct module *mod)
 	if (err)
 		return err;
 
+	/* Make module executable after ftrace is enabled */
+	mutex_lock(&module_mutex);
+	module_enable_x(mod);
+	mutex_unlock(&module_mutex);
+
 	blocking_notifier_call_chain(&module_notify_list,
 				     MODULE_STATE_COMING, mod);
 	return 0;
@@ -33,6 +33,9 @@ config HAVE_DYNAMIC_FTRACE
 config HAVE_DYNAMIC_FTRACE_WITH_REGS
 	bool
 
+config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+	bool
+
 config HAVE_FTRACE_MCOUNT_RECORD
 	bool
 	help
@@ -556,6 +559,11 @@ config DYNAMIC_FTRACE_WITH_REGS
 	depends on DYNAMIC_FTRACE
 	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS
 
+config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+	def_bool y
+	depends on DYNAMIC_FTRACE
+	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+
 config FUNCTION_PROFILER
 	bool "Kernel function profiler"
 	depends on FUNCTION_TRACER
@@ -751,9 +759,9 @@ config PREEMPTIRQ_DELAY_TEST
 	  configurable delay. The module busy waits for the duration of the
 	  critical section.
 
-	  For example, the following invocation forces a one-time irq-disabled
-	  critical section for 500us:
-	  modprobe preemptirq_delay_test test_mode=irq delay=500000
+	  For example, the following invocation generates a burst of three
+	  irq-disabled critical sections for 500us:
+	  modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3
 
 	  If unsure, say N
 
@@ -783,7 +791,7 @@ config TRACE_EVAL_MAP_FILE
 	  they are needed for the "eval_map" file. Enabling this option will
 	  increase the memory footprint of the running kernel.
 
-	  If unsure, say N
+	  If unsure, say N.
 
 config GCOV_PROFILE_FTRACE
 	bool "Enable GCOV profiling on ftrace subsystem"
@@ -332,9 +332,14 @@ int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
 	return 0;
 }
 
+/*
+ * Simply points to ftrace_stub, but with the proper protocol.
+ * Defined by the linker script in linux/vmlinux.lds.h
+ */
+extern void ftrace_stub_graph(struct ftrace_graph_ret *);
+
 /* The callbacks that hook a function */
-trace_func_graph_ret_t ftrace_graph_return =
-			(trace_func_graph_ret_t)ftrace_stub;
+trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph;
 trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;
 static trace_func_graph_ent_t __ftrace_graph_entry = ftrace_graph_entry_stub;
 
@@ -614,7 +619,7 @@ void unregister_ftrace_graph(struct fgraph_ops *gops)
 		goto out;
 
 	ftrace_graph_active--;
-	ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
+	ftrace_graph_return = ftrace_stub_graph;
 	ftrace_graph_entry = ftrace_graph_entry_stub;
 	__ftrace_graph_entry = ftrace_graph_entry_stub;
 	ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
@ -326,6 +326,8 @@ int __register_ftrace_function(struct ftrace_ops *ops)
|
|||
if (ops->flags & FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED)
|
||||
ops->flags |= FTRACE_OPS_FL_SAVE_REGS;
|
||||
#endif
|
||||
if (!ftrace_enabled && (ops->flags & FTRACE_OPS_FL_PERMANENT))
|
||||
return -EBUSY;
|
||||
|
||||
if (!core_kernel_data((unsigned long)ops))
|
||||
ops->flags |= FTRACE_OPS_FL_DYNAMIC;
|
||||
|
@ -463,10 +465,10 @@ static void *function_stat_start(struct tracer_stat *trace)
|
|||
|
||||
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
||||
/* function graph compares on total time */
|
||||
static int function_stat_cmp(void *p1, void *p2)
|
||||
static int function_stat_cmp(const void *p1, const void *p2)
|
||||
{
|
||||
struct ftrace_profile *a = p1;
|
||||
struct ftrace_profile *b = p2;
|
||||
const struct ftrace_profile *a = p1;
|
||||
const struct ftrace_profile *b = p2;
|
||||
|
||||
if (a->time < b->time)
|
||||
return -1;
|
||||
|
@ -477,10 +479,10 @@ static int function_stat_cmp(void *p1, void *p2)
|
|||
}
|
||||
#else
|
||||
/* not function graph compares against hits */
|
||||
static int function_stat_cmp(void *p1, void *p2)
|
||||
static int function_stat_cmp(const void *p1, const void *p2)
|
||||
{
|
||||
struct ftrace_profile *a = p1;
|
||||
struct ftrace_profile *b = p2;
|
||||
const struct ftrace_profile *a = p1;
|
||||
const struct ftrace_profile *b = p2;
|
||||
|
||||
if (a->counter < b->counter)
|
||||
return -1;
|
||||
|
@ -1018,11 +1020,6 @@ static bool update_all_ops;
|
|||
# error Dynamic ftrace depends on MCOUNT_RECORD
|
||||
#endif
|
||||
|
||||
struct ftrace_func_entry {
|
||||
struct hlist_node hlist;
|
||||
unsigned long ip;
|
||||
};
|
||||
|
||||
struct ftrace_func_probe {
|
||||
struct ftrace_probe_ops *probe_ops;
|
||||
struct ftrace_ops ops;
|
||||
|
@ -1370,23 +1367,15 @@ ftrace_hash_rec_enable_modify(struct ftrace_ops *ops, int filter_hash);
|
|||
static int ftrace_hash_ipmodify_update(struct ftrace_ops *ops,
|
||||
struct ftrace_hash *new_hash);
|
||||
|
||||
static struct ftrace_hash *
|
||||
__ftrace_hash_move(struct ftrace_hash *src)
|
||||
static struct ftrace_hash *dup_hash(struct ftrace_hash *src, int size)
|
||||
{
|
||||
struct ftrace_func_entry *entry;
|
||||
struct hlist_node *tn;
|
||||
struct hlist_head *hhd;
|
||||
struct ftrace_hash *new_hash;
|
||||
int size = src->count;
|
||||
struct hlist_head *hhd;
|
||||
struct hlist_node *tn;
|
||||
int bits = 0;
|
||||
int i;
|
||||
|
||||
/*
|
||||
* If the new source is empty, just return the empty_hash.
|
||||
*/
|
||||
if (ftrace_hash_empty(src))
|
||||
return EMPTY_HASH;
|
||||
|
||||
/*
|
||||
* Make the hash size about 1/2 the # found
|
||||
*/
|
||||
|
@ -1411,10 +1400,23 @@ __ftrace_hash_move(struct ftrace_hash *src)
|
|||
__add_hash_entry(new_hash, entry);
|
||||
}
|
||||
}
|
||||
|
||||
return new_hash;
|
||||
}
|
||||
|
||||
static struct ftrace_hash *
|
||||
__ftrace_hash_move(struct ftrace_hash *src)
|
||||
{
|
||||
int size = src->count;
|
||||
|
||||
/*
|
||||
* If the new source is empty, just return the empty_hash.
|
||||
*/
|
||||
if (ftrace_hash_empty(src))
|
||||
return EMPTY_HASH;
|
||||
|
||||
return dup_hash(src, size);
|
||||
}
|
||||
|
||||
static int
|
||||
ftrace_hash_move(struct ftrace_ops *ops, int enable,
|
||||
struct ftrace_hash **dst, struct ftrace_hash *src)
|
||||
|
@@ -1534,6 +1536,26 @@ static int ftrace_cmp_recs(const void *a, const void *b)
 	return 0;
 }
 
+static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
+{
+	struct ftrace_page *pg;
+	struct dyn_ftrace *rec = NULL;
+	struct dyn_ftrace key;
+
+	key.ip = start;
+	key.flags = end;	/* overload flags, as it is unsigned long */
+
+	for (pg = ftrace_pages_start; pg; pg = pg->next) {
+		if (end < pg->records[0].ip ||
+		    start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
+			continue;
+		rec = bsearch(&key, pg->records, pg->index,
+			      sizeof(struct dyn_ftrace),
+			      ftrace_cmp_recs);
+	}
+	return rec;
+}
+
 /**
  * ftrace_location_range - return the first address of a traced location
  *	if it touches the given ip range
@@ -1548,23 +1570,11 @@ static int ftrace_cmp_recs(const void *a, const void *b)
  */
 unsigned long ftrace_location_range(unsigned long start, unsigned long end)
 {
-	struct ftrace_page *pg;
 	struct dyn_ftrace *rec;
-	struct dyn_ftrace key;
-
-	key.ip = start;
-	key.flags = end;	/* overload flags, as it is unsigned long */
-
-	for (pg = ftrace_pages_start; pg; pg = pg->next) {
-		if (end < pg->records[0].ip ||
-		    start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
-			continue;
-		rec = bsearch(&key, pg->records, pg->index,
-			      sizeof(struct dyn_ftrace),
-			      ftrace_cmp_recs);
+	rec = lookup_rec(start, end);
 	if (rec)
 		return rec->ip;
-	}
 
 	return 0;
 }
@@ -1715,6 +1725,9 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
 		if (FTRACE_WARN_ON(ftrace_rec_count(rec) == FTRACE_REF_MAX))
 			return false;
 
+		if (ops->flags & FTRACE_OPS_FL_DIRECT)
+			rec->flags |= FTRACE_FL_DIRECT;
+
 		/*
 		 * If there's only a single callback registered to a
 		 * function, and the ops has a trampoline registered
@@ -1742,6 +1755,15 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
 			return false;
 		rec->flags--;
 
+		/*
+		 * Only the internal direct_ops should have the
+		 * DIRECT flag set. Thus, if it is removing a
+		 * function, then that function should no longer
+		 * be direct.
+		 */
+		if (ops->flags & FTRACE_OPS_FL_DIRECT)
+			rec->flags &= ~FTRACE_FL_DIRECT;
+
 		/*
 		 * If the rec had REGS enabled and the ops that is
 		 * being removed had REGS set, then see if there is
@@ -2077,6 +2099,7 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
 	 * If enabling and the REGS flag does not match the REGS_EN, or
 	 * the TRAMP flag doesn't match the TRAMP_EN, then do not ignore
 	 * this record. Set flags to fail the compare against ENABLED.
+	 * Same for direct calls.
 	 */
 	if (flag) {
 		if (!(rec->flags & FTRACE_FL_REGS) !=
@@ -2086,6 +2109,24 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
 		if (!(rec->flags & FTRACE_FL_TRAMP) !=
 		    !(rec->flags & FTRACE_FL_TRAMP_EN))
 			flag |= FTRACE_FL_TRAMP;
+
+		/*
+		 * Direct calls are special, as count matters.
+		 * We must test the record for direct, if the
+		 * DIRECT and DIRECT_EN do not match, but only
+		 * if the count is 1. That's because, if the
+		 * count is something other than one, we do not
+		 * want the direct enabled (it will be done via the
+		 * direct helper). But if DIRECT_EN is set, and
+		 * the count is not one, we need to clear it.
+		 */
+		if (ftrace_rec_count(rec) == 1) {
+			if (!(rec->flags & FTRACE_FL_DIRECT) !=
+			    !(rec->flags & FTRACE_FL_DIRECT_EN))
+				flag |= FTRACE_FL_DIRECT;
+		} else if (rec->flags & FTRACE_FL_DIRECT_EN) {
+			flag |= FTRACE_FL_DIRECT;
+		}
 	}
 
 	/* If the state of this record hasn't changed, then do nothing */
@@ -2110,6 +2151,25 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
 			else
 				rec->flags &= ~FTRACE_FL_TRAMP_EN;
 		}
+
+		if (flag & FTRACE_FL_DIRECT) {
+			/*
+			 * If there's only one user (direct_ops helper)
+			 * then we can call the direct function
+			 * directly (no ftrace trampoline).
+			 */
+			if (ftrace_rec_count(rec) == 1) {
+				if (rec->flags & FTRACE_FL_DIRECT)
+					rec->flags |= FTRACE_FL_DIRECT_EN;
+				else
+					rec->flags &= ~FTRACE_FL_DIRECT_EN;
+			} else {
+				/*
+				 * Can only call directly if there's
+				 * only one callback to the function.
+				 */
+				rec->flags &= ~FTRACE_FL_DIRECT_EN;
+			}
+		}
 	}
 
 	/*
@@ -2139,7 +2199,7 @@ static int ftrace_check_record(struct dyn_ftrace *rec, bool enable, bool update)
 	 * and REGS states. The _EN flags must be disabled though.
 	 */
 		rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN |
-				FTRACE_FL_REGS_EN);
+				FTRACE_FL_REGS_EN | FTRACE_FL_DIRECT_EN);
 	}
 
 	ftrace_bug_type = FTRACE_BUG_NOP;
@@ -2294,6 +2354,52 @@ ftrace_find_tramp_ops_new(struct dyn_ftrace *rec)
 	return NULL;
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+/* Protected by rcu_tasks for reading, and direct_mutex for writing */
+static struct ftrace_hash *direct_functions = EMPTY_HASH;
+static DEFINE_MUTEX(direct_mutex);
+int ftrace_direct_func_count;
+
+/*
+ * Search the direct_functions hash to see if the given instruction pointer
+ * has a direct caller attached to it.
+ */
+static unsigned long find_rec_direct(unsigned long ip)
+{
+	struct ftrace_func_entry *entry;
+
+	entry = __ftrace_lookup_ip(direct_functions, ip);
+	if (!entry)
+		return 0;
+
+	return entry->direct;
+}
+
+static void call_direct_funcs(unsigned long ip, unsigned long pip,
+			      struct ftrace_ops *ops, struct pt_regs *regs)
+{
+	unsigned long addr;
+
+	addr = find_rec_direct(ip);
+	if (!addr)
+		return;
+
+	arch_ftrace_set_direct_caller(regs, addr);
+}
+
+struct ftrace_ops direct_ops = {
+	.func		= call_direct_funcs,
+	.flags		= FTRACE_OPS_FL_IPMODIFY | FTRACE_OPS_FL_RECURSION_SAFE
+			  | FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS
+			  | FTRACE_OPS_FL_PERMANENT,
+};
+#else
+static inline unsigned long find_rec_direct(unsigned long ip)
+{
+	return 0;
+}
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
 /**
  * ftrace_get_addr_new - Get the call address to set to
  * @rec:  The ftrace record descriptor
@@ -2307,6 +2413,15 @@ ftrace_find_tramp_ops_new(struct dyn_ftrace *rec)
 unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec)
 {
 	struct ftrace_ops *ops;
+	unsigned long addr;
+
+	if ((rec->flags & FTRACE_FL_DIRECT) &&
+	    (ftrace_rec_count(rec) == 1)) {
+		addr = find_rec_direct(rec->ip);
+		if (addr)
+			return addr;
+		WARN_ON_ONCE(1);
+	}
 
 	/* Trampolines take precedence over regs */
 	if (rec->flags & FTRACE_FL_TRAMP) {
@@ -2339,6 +2454,15 @@ unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec)
 unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec)
 {
 	struct ftrace_ops *ops;
+	unsigned long addr;
+
+	/* Direct calls take precedence over trampolines */
+	if (rec->flags & FTRACE_FL_DIRECT_EN) {
+		addr = find_rec_direct(rec->ip);
+		if (addr)
+			return addr;
+		WARN_ON_ONCE(1);
+	}
 
 	/* Trampolines take precedence over regs */
 	if (rec->flags & FTRACE_FL_TRAMP_EN) {
@@ -2861,6 +2985,8 @@ static void ftrace_shutdown_sysctl(void)
 
 static u64		ftrace_update_time;
 unsigned long		ftrace_update_tot_cnt;
+unsigned long		ftrace_number_of_pages;
+unsigned long		ftrace_number_of_groups;
 
 static inline int ops_traces_mod(struct ftrace_ops *ops)
 {
@@ -2985,6 +3111,9 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count)
 		goto again;
 	}
 
+	ftrace_number_of_pages += 1 << order;
+	ftrace_number_of_groups++;
+
 	cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
 	pg->size = cnt;
 
@@ -3040,6 +3169,8 @@ ftrace_allocate_pages(unsigned long num_to_init)
 		start_pg = pg->next;
 		kfree(pg);
 		pg = start_pg;
+		ftrace_number_of_pages -= 1 << order;
+		ftrace_number_of_groups--;
 	}
 	pr_info("ftrace: FAILED to allocate memory for functions\n");
 	return NULL;
@@ -3450,10 +3581,11 @@ static int t_show(struct seq_file *m, void *v)
 	if (iter->flags & FTRACE_ITER_ENABLED) {
 		struct ftrace_ops *ops;
 
-		seq_printf(m, " (%ld)%s%s",
+		seq_printf(m, " (%ld)%s%s%s",
 			   ftrace_rec_count(rec),
 			   rec->flags & FTRACE_FL_REGS ? " R" : "  ",
-			   rec->flags & FTRACE_FL_IPMODIFY ? " I" : "  ");
+			   rec->flags & FTRACE_FL_IPMODIFY ? " I" : "  ",
+			   rec->flags & FTRACE_FL_DIRECT ? " D" : "  ");
 		if (rec->flags & FTRACE_FL_TRAMP_EN) {
 			ops = ftrace_find_tramp_ops_any(rec);
 			if (ops) {
@@ -3469,6 +3601,13 @@ static int t_show(struct seq_file *m, void *v)
 		} else {
 			add_trampoline_func(m, NULL, rec);
 		}
+		if (rec->flags & FTRACE_FL_DIRECT) {
+			unsigned long direct;
+
+			direct = find_rec_direct(rec->ip);
+			if (direct)
+				seq_printf(m, "\n\tdirect-->%pS", (void *)direct);
+		}
 	}
 
 	seq_putc(m, '\n');
@@ -4800,6 +4939,366 @@ ftrace_set_addr(struct ftrace_ops *ops, unsigned long ip, int remove,
 	return ftrace_set_hash(ops, NULL, 0, ip, remove, reset, enable);
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+
+struct ftrace_direct_func {
+	struct list_head	next;
+	unsigned long		addr;
+	int			count;
+};
+
+static LIST_HEAD(ftrace_direct_funcs);
+
+/**
+ * ftrace_find_direct_func - test an address if it is a registered direct caller
+ * @addr: The address of a registered direct caller
+ *
+ * This searches to see if a ftrace direct caller has been registered
+ * at a specific address, and if so, it returns a descriptor for it.
+ *
+ * This can be used by architecture code to see if an address is
+ * a direct caller (trampoline) attached to a fentry/mcount location.
+ * This is useful for the function_graph tracer, as it may need to
+ * do adjustments if it traced a location that also has a direct
+ * trampoline attached to it.
+ */
+struct ftrace_direct_func *ftrace_find_direct_func(unsigned long addr)
+{
+	struct ftrace_direct_func *entry;
+	bool found = false;
+
+	/* May be called by fgraph trampoline (protected by rcu tasks) */
+	list_for_each_entry_rcu(entry, &ftrace_direct_funcs, next) {
+		if (entry->addr == addr) {
+			found = true;
+			break;
+		}
+	}
+	if (found)
+		return entry;
+
+	return NULL;
+}
+
+/**
+ * register_ftrace_direct - Call a custom trampoline directly
+ * @ip: The address of the nop at the beginning of a function
+ * @addr: The address of the trampoline to call at @ip
+ *
+ * This is used to connect a direct call from the nop location (@ip)
+ * at the start of ftrace traced functions. The location that it calls
+ * (@addr) must be able to handle a direct call, and save the parameters
+ * of the function being traced, and restore them (or inject new ones
+ * if needed), before returning.
+ *
+ * Returns:
+ *  0 on success
+ *  -EBUSY - Another direct function is already attached (there can be only one)
+ *  -ENODEV - @ip does not point to a ftrace nop location (or not supported)
+ *  -ENOMEM - There was an allocation failure.
+ */
+int register_ftrace_direct(unsigned long ip, unsigned long addr)
+{
+	struct ftrace_direct_func *direct;
+	struct ftrace_func_entry *entry;
+	struct ftrace_hash *free_hash = NULL;
+	struct dyn_ftrace *rec;
+	int ret = -EBUSY;
+
+	mutex_lock(&direct_mutex);
+
+	/* See if there's a direct function at @ip already */
+	if (find_rec_direct(ip))
+		goto out_unlock;
+
+	ret = -ENODEV;
+	rec = lookup_rec(ip, ip);
+	if (!rec)
+		goto out_unlock;
+
+	/*
+	 * Check if the rec says it has a direct call but we didn't
+	 * find one earlier?
+	 */
+	if (WARN_ON(rec->flags & FTRACE_FL_DIRECT))
+		goto out_unlock;
+
+	/* Make sure the ip points to the exact record */
+	if (ip != rec->ip) {
+		ip = rec->ip;
+		/* Need to check this ip for a direct. */
+		if (find_rec_direct(ip))
+			goto out_unlock;
+	}
+
+	ret = -ENOMEM;
+	if (ftrace_hash_empty(direct_functions) ||
+	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {
+		struct ftrace_hash *new_hash;
+		int size = ftrace_hash_empty(direct_functions) ? 0 :
+			direct_functions->count + 1;
+
+		if (size < 32)
+			size = 32;
+
+		new_hash = dup_hash(direct_functions, size);
+		if (!new_hash)
+			goto out_unlock;
+
+		free_hash = direct_functions;
+		direct_functions = new_hash;
+	}
+
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		goto out_unlock;
+
+	direct = ftrace_find_direct_func(addr);
+	if (!direct) {
+		direct = kmalloc(sizeof(*direct), GFP_KERNEL);
+		if (!direct) {
+			kfree(entry);
+			goto out_unlock;
+		}
+		direct->addr = addr;
+		direct->count = 0;
+		list_add_rcu(&direct->next, &ftrace_direct_funcs);
+		ftrace_direct_func_count++;
+	}
+
+	entry->ip = ip;
+	entry->direct = addr;
+	__add_hash_entry(direct_functions, entry);
+
+	ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
+	if (ret)
+		remove_hash_entry(direct_functions, entry);
+
+	if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
+		ret = register_ftrace_function(&direct_ops);
+		if (ret)
+			ftrace_set_filter_ip(&direct_ops, ip, 1, 0);
+	}
+
+	if (ret) {
+		kfree(entry);
+		if (!direct->count) {
+			list_del_rcu(&direct->next);
+			synchronize_rcu_tasks();
+			kfree(direct);
+			if (free_hash)
+				free_ftrace_hash(free_hash);
+			free_hash = NULL;
+			ftrace_direct_func_count--;
+		}
+	} else {
+		direct->count++;
+	}
+ out_unlock:
+	mutex_unlock(&direct_mutex);
+
+	if (free_hash) {
+		synchronize_rcu_tasks();
+		free_ftrace_hash(free_hash);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(register_ftrace_direct);
+
+static struct ftrace_func_entry *find_direct_entry(unsigned long *ip,
+						   struct dyn_ftrace **recp)
+{
+	struct ftrace_func_entry *entry;
+	struct dyn_ftrace *rec;
+
+	rec = lookup_rec(*ip, *ip);
+	if (!rec)
+		return NULL;
+
+	entry = __ftrace_lookup_ip(direct_functions, rec->ip);
+	if (!entry) {
+		WARN_ON(rec->flags & FTRACE_FL_DIRECT);
+		return NULL;
+	}
+
+	WARN_ON(!(rec->flags & FTRACE_FL_DIRECT));
+
+	/* Passed in ip just needs to be on the call site */
+	*ip = rec->ip;
+
+	if (recp)
+		*recp = rec;
+
+	return entry;
+}
+
+int unregister_ftrace_direct(unsigned long ip, unsigned long addr)
+{
+	struct ftrace_direct_func *direct;
+	struct ftrace_func_entry *entry;
+	int ret = -ENODEV;
+
+	mutex_lock(&direct_mutex);
+
+	entry = find_direct_entry(&ip, NULL);
+	if (!entry)
+		goto out_unlock;
+
+	if (direct_functions->count == 1)
+		unregister_ftrace_function(&direct_ops);
+
+	ret = ftrace_set_filter_ip(&direct_ops, ip, 1, 0);
+
+	WARN_ON(ret);
+
+	remove_hash_entry(direct_functions, entry);
+
+	direct = ftrace_find_direct_func(addr);
+	if (!WARN_ON(!direct)) {
+		/* This is the good path (see the ! before WARN) */
+		direct->count--;
+		WARN_ON(direct->count < 0);
+		if (!direct->count) {
+			list_del_rcu(&direct->next);
+			synchronize_rcu_tasks();
+			kfree(direct);
+			ftrace_direct_func_count--;
+		}
+	}
+ out_unlock:
+	mutex_unlock(&direct_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_ftrace_direct);
+
+static struct ftrace_ops stub_ops = {
+	.func		= ftrace_stub,
+};
+
+/**
+ * ftrace_modify_direct_caller - modify ftrace nop directly
+ * @entry: The ftrace hash entry of the direct helper for @rec
+ * @rec: The record representing the function site to patch
+ * @old_addr: The location that the site at @rec->ip currently calls
+ * @new_addr: The location that the site at @rec->ip should call
+ *
+ * An architecture may overwrite this function to optimize the
+ * changing of the direct callback on an ftrace nop location.
+ * This is called with the ftrace_lock mutex held, and no other
+ * ftrace callbacks are on the associated record (@rec). Thus,
+ * it is safe to modify the ftrace record, where it should be
+ * currently calling @old_addr directly, to call @new_addr.
+ *
+ * Safety checks should be made to make sure that the code at
+ * @rec->ip is currently calling @old_addr. And this must
+ * also update entry->direct to @new_addr.
+ */
+int __weak ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
+				       struct dyn_ftrace *rec,
+				       unsigned long old_addr,
+				       unsigned long new_addr)
+{
+	unsigned long ip = rec->ip;
+	int ret;
+
+	/*
+	 * The ftrace_lock was used to determine if the record
+	 * had more than one registered user to it. If it did,
+	 * we needed to prevent that from changing to do the quick
+	 * switch. But if it did not (only a direct caller was attached)
+	 * then this function is called. But this function can deal
+	 * with attached callers to the rec that we care about, and
+	 * since this function uses standard ftrace calls that take
+	 * the ftrace_lock mutex, we need to release it.
+	 */
+	mutex_unlock(&ftrace_lock);
+
+	/*
+	 * By setting a stub function at the same address, we force
+	 * the code to call the iterator and the direct_ops helper.
+	 * This means that @ip does not call the direct call, and
+	 * we can simply modify it.
+	 */
+	ret = ftrace_set_filter_ip(&stub_ops, ip, 0, 0);
+	if (ret)
+		goto out_lock;
+
+	ret = register_ftrace_function(&stub_ops);
+	if (ret) {
+		ftrace_set_filter_ip(&stub_ops, ip, 1, 0);
+		goto out_lock;
+	}
+
+	entry->direct = new_addr;
+
+	/*
+	 * By removing the stub, we put back the direct call, calling
+	 * the @new_addr.
+	 */
+	unregister_ftrace_function(&stub_ops);
+	ftrace_set_filter_ip(&stub_ops, ip, 1, 0);
+
+ out_lock:
+	mutex_lock(&ftrace_lock);
+
+	return ret;
+}
+
+/**
+ * modify_ftrace_direct - Modify an existing direct call to call something else
+ * @ip: The instruction pointer to modify
+ * @old_addr: The address that the current @ip calls directly
+ * @new_addr: The address that the @ip should call
+ *
+ * This modifies a ftrace direct caller at an instruction pointer without
+ * having to disable it first. The direct call will switch over to the
+ * @new_addr without missing anything.
+ *
+ * Returns: zero on success. Non zero on error, which includes:
+ *  -ENODEV : the @ip given has no direct caller attached
+ *  -EINVAL : the @old_addr does not match the current direct caller
+ */
+int modify_ftrace_direct(unsigned long ip,
+			 unsigned long old_addr, unsigned long new_addr)
+{
+	struct ftrace_func_entry *entry;
+	struct dyn_ftrace *rec;
+	int ret = -ENODEV;
+
+	mutex_lock(&direct_mutex);
+
+	mutex_lock(&ftrace_lock);
+	entry = find_direct_entry(&ip, &rec);
+	if (!entry)
+		goto out_unlock;
+
+	ret = -EINVAL;
+	if (entry->direct != old_addr)
+		goto out_unlock;
+
+	/*
+	 * If there's no other ftrace callback on the rec->ip location,
+	 * then it can be changed directly by the architecture.
+	 * If there is another caller, then we just need to change the
+	 * direct caller helper to point to @new_addr.
+	 */
+	if (ftrace_rec_count(rec) == 1) {
+		ret = ftrace_modify_direct_caller(entry, rec, old_addr, new_addr);
+	} else {
+		entry->direct = new_addr;
+		ret = 0;
+	}
+
+ out_unlock:
+	mutex_unlock(&ftrace_lock);
+	mutex_unlock(&direct_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(modify_ftrace_direct);
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
 /**
  * ftrace_set_filter_ip - set a function to filter on in ftrace by address
  * @ops - the ops to set the filter with
@@ -5818,6 +6317,8 @@ void ftrace_release_mod(struct module *mod)
 		free_pages((unsigned long)pg->records, order);
 		tmp_page = pg->next;
 		kfree(pg);
+		ftrace_number_of_pages -= 1 << order;
+		ftrace_number_of_groups--;
 	}
 }
 
@@ -6159,6 +6660,8 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
 		*last_pg = pg->next;
 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
 		free_pages((unsigned long)pg->records, order);
+		ftrace_number_of_pages -= 1 << order;
+		ftrace_number_of_groups--;
 		kfree(pg);
 		pg = container_of(last_pg, struct ftrace_page, next);
 		if (!(*last_pg))
@@ -6214,6 +6717,9 @@ void __init ftrace_init(void)
 				  __start_mcount_loc,
 				  __stop_mcount_loc);
 
+	pr_info("ftrace: allocated %ld pages with %ld groups\n",
+		ftrace_number_of_pages, ftrace_number_of_groups);
+
 	set_ftrace_early_filters();
 
 	return;
@@ -6754,6 +7260,18 @@ int unregister_ftrace_function(struct ftrace_ops *ops)
 }
 EXPORT_SYMBOL_GPL(unregister_ftrace_function);
 
+static bool is_permanent_ops_registered(void)
+{
+	struct ftrace_ops *op;
+
+	do_for_each_ftrace_op(op, ftrace_ops_list) {
+		if (op->flags & FTRACE_OPS_FL_PERMANENT)
+			return true;
+	} while_for_each_ftrace_op(op);
+
+	return false;
+}
+
 int
 ftrace_enable_sysctl(struct ctl_table *table, int write,
 		     void __user *buffer, size_t *lenp,
@@ -6771,8 +7289,6 @@ ftrace_enable_sysctl(struct ctl_table *table, int write,
 	if (ret || !write || (last_ftrace_enabled == !!ftrace_enabled))
 		goto out;
 
-	last_ftrace_enabled = !!ftrace_enabled;
-
 	if (ftrace_enabled) {
 
 		/* we are starting ftrace again */
@@ -6783,12 +7299,19 @@ ftrace_enable_sysctl(struct ctl_table *table, int write,
 		ftrace_startup_sysctl();
 
 	} else {
+		if (is_permanent_ops_registered()) {
+			ftrace_enabled = true;
+			ret = -EBUSY;
+			goto out;
+		}
+
 		/* stopping ftrace calls (just send to ftrace_stub) */
 		ftrace_trace_function = ftrace_stub;
 
 		ftrace_shutdown_sysctl();
 	}
 
+	last_ftrace_enabled = !!ftrace_enabled;
 out:
 	mutex_unlock(&ftrace_lock);
 	return ret;
@@ -10,18 +10,25 @@
 #include <linux/interrupt.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
+#include <linux/kobject.h>
 #include <linux/kthread.h>
 #include <linux/module.h>
 #include <linux/printk.h>
 #include <linux/string.h>
+#include <linux/sysfs.h>
 
 static ulong delay = 100;
-static char test_mode[10] = "irq";
+static char test_mode[12] = "irq";
+static uint burst_size = 1;
 
-module_param_named(delay, delay, ulong, S_IRUGO);
-module_param_string(test_mode, test_mode, 10, S_IRUGO);
-MODULE_PARM_DESC(delay, "Period in microseconds (100 uS default)");
-MODULE_PARM_DESC(test_mode, "Mode of the test such as preempt or irq (default irq)");
+module_param_named(delay, delay, ulong, 0444);
+module_param_string(test_mode, test_mode, 12, 0444);
+module_param_named(burst_size, burst_size, uint, 0444);
+MODULE_PARM_DESC(delay, "Period in microseconds (100 us default)");
+MODULE_PARM_DESC(test_mode, "Mode of the test such as preempt, irq, or alternate (default irq)");
+MODULE_PARM_DESC(burst_size, "The size of a burst (default 1)");
+
+#define MIN(x, y) ((x) < (y) ? (x) : (y))
 
 static void busy_wait(ulong time)
 {
@@ -34,37 +41,136 @@ static void busy_wait(ulong time)
 	} while ((end - start) < (time * 1000));
 }
 
-static int preemptirq_delay_run(void *data)
+static __always_inline void irqoff_test(void)
 {
 	unsigned long flags;
-
-	if (!strcmp(test_mode, "irq")) {
-		local_irq_save(flags);
-		busy_wait(delay);
-		local_irq_restore(flags);
-	} else if (!strcmp(test_mode, "preempt")) {
-		preempt_disable();
-		busy_wait(delay);
-		preempt_enable();
-	}
+	local_irq_save(flags);
+	busy_wait(delay);
+	local_irq_restore(flags);
+}
+
+static __always_inline void preemptoff_test(void)
+{
+	preempt_disable();
+	busy_wait(delay);
+	preempt_enable();
+}
+
+static void execute_preemptirqtest(int idx)
+{
+	if (!strcmp(test_mode, "irq"))
+		irqoff_test();
+	else if (!strcmp(test_mode, "preempt"))
+		preemptoff_test();
+	else if (!strcmp(test_mode, "alternate")) {
+		if (idx % 2 == 0)
+			irqoff_test();
+		else
+			preemptoff_test();
+	}
+}
+
+#define DECLARE_TESTFN(POSTFIX)				\
+	static void preemptirqtest_##POSTFIX(int idx)	\
+	{						\
+		execute_preemptirqtest(idx);		\
+	}						\
+
+/*
+ * We create 10 different functions, so that we can get 10 different
+ * backtraces.
+ */
+DECLARE_TESTFN(0)
+DECLARE_TESTFN(1)
+DECLARE_TESTFN(2)
+DECLARE_TESTFN(3)
+DECLARE_TESTFN(4)
+DECLARE_TESTFN(5)
+DECLARE_TESTFN(6)
+DECLARE_TESTFN(7)
+DECLARE_TESTFN(8)
+DECLARE_TESTFN(9)
+
+static void (*testfuncs[])(int) = {
+	preemptirqtest_0,
+	preemptirqtest_1,
+	preemptirqtest_2,
+	preemptirqtest_3,
+	preemptirqtest_4,
+	preemptirqtest_5,
+	preemptirqtest_6,
+	preemptirqtest_7,
+	preemptirqtest_8,
+	preemptirqtest_9,
+};
+
+#define NR_TEST_FUNCS ARRAY_SIZE(testfuncs)
+
+static int preemptirq_delay_run(void *data)
+{
+	int i;
+	int s = MIN(burst_size, NR_TEST_FUNCS);
+
+	for (i = 0; i < s; i++)
+		(testfuncs[i])(i);
 	return 0;
 }
 
-static int __init preemptirq_delay_init(void)
+static struct task_struct *preemptirq_start_test(void)
 {
 	char task_name[50];
-	struct task_struct *test_task;
 
 	snprintf(task_name, sizeof(task_name), "%s_test", test_mode);
-
-	test_task = kthread_run(preemptirq_delay_run, NULL, task_name);
-	return PTR_ERR_OR_ZERO(test_task);
+	return kthread_run(preemptirq_delay_run, NULL, task_name);
+}
+
+static ssize_t trigger_store(struct kobject *kobj, struct kobj_attribute *attr,
+			 const char *buf, size_t count)
+{
+	preemptirq_start_test();
+	return count;
 }
 
+static struct kobj_attribute trigger_attribute =
+	__ATTR(trigger, 0200, NULL, trigger_store);
+
+static struct attribute *attrs[] = {
+	&trigger_attribute.attr,
+	NULL,
+};
+
+static struct attribute_group attr_group = {
+	.attrs = attrs,
+};
+
+static struct kobject *preemptirq_delay_kobj;
+
+static int __init preemptirq_delay_init(void)
+{
+	struct task_struct *test_task;
+	int retval;
+
+	test_task = preemptirq_start_test();
+	retval = PTR_ERR_OR_ZERO(test_task);
+	if (retval != 0)
+		return retval;
+
+	preemptirq_delay_kobj = kobject_create_and_add("preemptirq_delay_test",
+						       kernel_kobj);
+	if (!preemptirq_delay_kobj)
+		return -ENOMEM;
+
+	retval = sysfs_create_group(preemptirq_delay_kobj, &attr_group);
+	if (retval)
+		kobject_put(preemptirq_delay_kobj);
+
+	return retval;
+}
+
 static void __exit preemptirq_delay_exit(void)
 {
-	return;
+	kobject_put(preemptirq_delay_kobj);
 }
 
 module_init(preemptirq_delay_init)
@@ -269,10 +269,10 @@ static void ring_buffer_producer(void)
 
 #ifndef CONFIG_PREEMPTION
 	/*
-	 * If we are a non preempt kernel, the 10 second run will
+	 * If we are a non preempt kernel, the 10 seconds run will
 	 * stop everything while it runs. Instead, we will call
 	 * cond_resched and also add any time that was lost by a
-	 * rescedule.
+	 * reschedule.
 	 *
 	 * Do a cond resched at the same frequency we would wake up
 	 * the reader.
@@ -45,6 +45,9 @@
 #include <linux/trace.h>
 #include <linux/sched/clock.h>
 #include <linux/sched/rt.h>
+#include <linux/fsnotify.h>
+#include <linux/irq_work.h>
+#include <linux/workqueue.h>
 
 #include "trace.h"
 #include "trace_output.h"
@@ -298,12 +301,24 @@ static void __trace_array_put(struct trace_array *this_tr)
 	this_tr->ref--;
 }
 
+/**
+ * trace_array_put - Decrement the reference counter for this trace array.
+ *
+ * NOTE: Use this when we no longer need the trace array returned by
+ * trace_array_get_by_name(). This ensures the trace array can be later
+ * destroyed.
+ *
+ */
 void trace_array_put(struct trace_array *this_tr)
 {
+	if (!this_tr)
+		return;
+
 	mutex_lock(&trace_types_lock);
 	__trace_array_put(this_tr);
 	mutex_unlock(&trace_types_lock);
 }
+EXPORT_SYMBOL_GPL(trace_array_put);
 
 int tracing_check_open_get_tr(struct trace_array *tr)
 {
@@ -1497,6 +1512,74 @@ static ssize_t trace_seq_to_buffer(struct trace_seq *s, void *buf, size_t cnt)
 }
 
 unsigned long __read_mostly	tracing_thresh;
+static const struct file_operations tracing_max_lat_fops;
+
+#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
+	defined(CONFIG_FSNOTIFY)
+
+static struct workqueue_struct *fsnotify_wq;
+
+static void latency_fsnotify_workfn(struct work_struct *work)
+{
+	struct trace_array *tr = container_of(work, struct trace_array,
+					      fsnotify_work);
+	fsnotify(tr->d_max_latency->d_inode, FS_MODIFY,
+		 tr->d_max_latency->d_inode, FSNOTIFY_EVENT_INODE, NULL, 0);
+}
+
+static void latency_fsnotify_workfn_irq(struct irq_work *iwork)
+{
+	struct trace_array *tr = container_of(iwork, struct trace_array,
+					      fsnotify_irqwork);
+	queue_work(fsnotify_wq, &tr->fsnotify_work);
+}
+
+static void trace_create_maxlat_file(struct trace_array *tr,
+				     struct dentry *d_tracer)
+{
+	INIT_WORK(&tr->fsnotify_work, latency_fsnotify_workfn);
+	init_irq_work(&tr->fsnotify_irqwork, latency_fsnotify_workfn_irq);
+	tr->d_max_latency = trace_create_file("tracing_max_latency", 0644,
+					      d_tracer, &tr->max_latency,
+					      &tracing_max_lat_fops);
+}
+
+__init static int latency_fsnotify_init(void)
+{
+	fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
+				      WQ_UNBOUND | WQ_HIGHPRI, 0);
+	if (!fsnotify_wq) {
+		pr_err("Unable to allocate tr_max_lat_wq\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+late_initcall_sync(latency_fsnotify_init);
+
+void latency_fsnotify(struct trace_array *tr)
+{
+	if (!fsnotify_wq)
+		return;
+	/*
+	 * We cannot call queue_work(&tr->fsnotify_work) from here because it's
+	 * possible that we are called from __schedule() or do_idle(), which
+	 * could cause a deadlock.
+	 */
+	irq_work_queue(&tr->fsnotify_irqwork);
+}
+
+/*
+ * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
+ *  defined(CONFIG_FSNOTIFY)
+ */
+#else
+
+#define trace_create_maxlat_file(tr, d_tracer)				\
+	trace_create_file("tracing_max_latency", 0644, d_tracer,	\
+			  &tr->max_latency, &tracing_max_lat_fops)
+
+#endif
+
 #ifdef CONFIG_TRACER_MAX_TRACE
 /*
@ -1536,6 +1619,7 @@ __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
|
|||
|
||||
/* record this tasks comm */
|
||||
tracing_record_cmdline(tsk);
|
||||
latency_fsnotify(tr);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -3225,6 +3309,9 @@ int trace_array_printk(struct trace_array *tr,
|
|||
if (!(global_trace.trace_flags & TRACE_ITER_PRINTK))
|
||||
return 0;
|
||||
|
||||
if (!tr)
|
||||
return -ENOENT;
|
||||
|
||||
va_start(ap, fmt);
|
||||
ret = trace_array_vprintk(tr, ip, fmt, ap);
|
||||
va_end(ap);
|
||||
|
@ -3654,6 +3741,8 @@ print_trace_header(struct seq_file *m, struct trace_iterator *iter)
|
|||
"desktop",
|
||||
#elif defined(CONFIG_PREEMPT)
|
||||
"preempt",
|
||||
#elif defined(CONFIG_PREEMPT_RT)
|
||||
"preempt_rt",
|
||||
#else
|
||||
"unknown",
|
||||
#endif
|
||||
|
@ -4609,7 +4698,7 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
|
|||
|
||||
if (mask == TRACE_ITER_RECORD_TGID) {
|
||||
if (!tgid_map)
|
||||
tgid_map = kcalloc(PID_MAX_DEFAULT + 1,
|
||||
tgid_map = kvcalloc(PID_MAX_DEFAULT + 1,
|
||||
sizeof(*tgid_map),
|
||||
GFP_KERNEL);
|
||||
if (!tgid_map) {
|
||||
|
@ -7583,14 +7672,23 @@ static ssize_t
|
|||
tracing_read_dyn_info(struct file *filp, char __user *ubuf,
|
||||
size_t cnt, loff_t *ppos)
|
||||
{
|
||||
unsigned long *p = filp->private_data;
|
||||
char buf[64]; /* Not too big for a shallow stack */
|
||||
ssize_t ret;
|
||||
char *buf;
|
||||
int r;
|
||||
|
||||
r = scnprintf(buf, 63, "%ld", *p);
|
||||
buf[r++] = '\n';
|
||||
/* 256 should be plenty to hold the amount needed */
|
||||
buf = kmalloc(256, GFP_KERNEL);
|
||||
if (!buf)
|
||||
return -ENOMEM;
|
||||
|
||||
return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
|
||||
r = scnprintf(buf, 256, "%ld pages:%ld groups: %ld\n",
|
||||
ftrace_update_tot_cnt,
|
||||
ftrace_number_of_pages,
|
||||
ftrace_number_of_groups);
|
||||
|
||||
ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
|
||||
kfree(buf);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct file_operations tracing_dyn_info_fops = {
|
||||
|
@ -8351,24 +8449,15 @@ static void update_tracer_options(struct trace_array *tr)
|
|||
mutex_unlock(&trace_types_lock);
|
||||
}
|
||||
|
||||
struct trace_array *trace_array_create(const char *name)
|
||||
static struct trace_array *trace_array_create(const char *name)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
|
||||
ret = -EEXIST;
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr->name && strcmp(tr->name, name) == 0)
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
ret = -ENOMEM;
|
||||
tr = kzalloc(sizeof(*tr), GFP_KERNEL);
|
||||
if (!tr)
|
||||
goto out_unlock;
|
||||
return ERR_PTR(ret);
|
||||
|
||||
tr->name = kstrdup(name, GFP_KERNEL);
|
||||
if (!tr->name)
|
||||
|
@ -8413,8 +8502,8 @@ struct trace_array *trace_array_create(const char *name)
|
|||
|
||||
list_add(&tr->list, &ftrace_trace_arrays);
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
tr->ref++;
|
||||
|
||||
|
||||
return tr;
|
||||
|
||||
|
@ -8424,24 +8513,77 @@ struct trace_array *trace_array_create(const char *name)
|
|||
kfree(tr->name);
|
||||
kfree(tr);
|
||||
|
||||
out_unlock:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_array_create);
|
||||
|
||||
static int instance_mkdir(const char *name)
|
||||
{
|
||||
return PTR_ERR_OR_ZERO(trace_array_create(name));
|
||||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
|
||||
ret = -EEXIST;
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr->name && strcmp(tr->name, name) == 0)
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
tr = trace_array_create(name);
|
||||
|
||||
ret = PTR_ERR_OR_ZERO(tr);
|
||||
|
||||
out_unlock:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* trace_array_get_by_name - Create/Lookup a trace array, given its name.
|
||||
* @name: The name of the trace array to be looked up/created.
|
||||
*
|
||||
* Returns pointer to trace array with given name.
|
||||
* NULL, if it cannot be created.
|
||||
*
|
||||
* NOTE: This function increments the reference counter associated with the
|
||||
* trace array returned. This makes sure it cannot be freed while in use.
|
||||
* Use trace_array_put() once the trace array is no longer needed.
|
||||
*
|
||||
*/
|
||||
struct trace_array *trace_array_get_by_name(const char *name)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr->name && strcmp(tr->name, name) == 0)
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
tr = trace_array_create(name);
|
||||
|
||||
if (IS_ERR(tr))
|
||||
tr = NULL;
|
||||
out_unlock:
|
||||
if (tr)
|
||||
tr->ref++;
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
return tr;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_array_get_by_name);
|
||||
|
||||
static int __remove_instance(struct trace_array *tr)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (tr->ref || (tr->current_trace && tr->current_trace->ref))
|
||||
/* Reference counter for a newly created trace array = 1. */
|
||||
if (tr->ref > 1 || (tr->current_trace && tr->current_trace->ref))
|
||||
return -EBUSY;
|
||||
|
||||
list_del(&tr->list);
|
||||
|
@ -8473,17 +8615,26 @@ static int __remove_instance(struct trace_array *tr)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int trace_array_destroy(struct trace_array *tr)
|
||||
int trace_array_destroy(struct trace_array *this_tr)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
if (!tr)
|
||||
if (!this_tr)
|
||||
return -EINVAL;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
|
||||
ret = -ENODEV;
|
||||
|
||||
/* Making sure trace array exists before destroying it. */
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr == this_tr) {
|
||||
ret = __remove_instance(tr);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
|
@ -8585,8 +8736,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
|
|||
create_trace_options_dir(tr);
|
||||
|
||||
#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
|
||||
trace_create_file("tracing_max_latency", 0644, d_tracer,
|
||||
&tr->max_latency, &tracing_max_lat_fops);
|
||||
trace_create_maxlat_file(tr, d_tracer);
|
||||
#endif
|
||||
|
||||
if (ftrace_create_function_files(tr, d_tracer))
|
||||
|
@ -8782,7 +8932,7 @@ static __init int tracer_init_tracefs(void)
|
|||
|
||||
#ifdef CONFIG_DYNAMIC_FTRACE
|
||||
trace_create_file("dyn_ftrace_total_info", 0444, d_tracer,
|
||||
&ftrace_update_tot_cnt, &tracing_dyn_info_fops);
|
||||
NULL, &tracing_dyn_info_fops);
|
||||
#endif
|
||||
|
||||
create_trace_instances(d_tracer);
|
||||
|
|
|
@@ -11,11 +11,14 @@
 #include <linux/mmiotrace.h>
 #include <linux/tracepoint.h>
 #include <linux/ftrace.h>
+#include <linux/trace.h>
 #include <linux/hw_breakpoint.h>
 #include <linux/trace_seq.h>
 #include <linux/trace_events.h>
 #include <linux/compiler.h>
 #include <linux/glob.h>
+#include <linux/irq_work.h>
+#include <linux/workqueue.h>
 
 #ifdef CONFIG_FTRACE_SYSCALLS
 #include <asm/unistd.h>		/* For NR_SYSCALLS */

@@ -264,6 +267,11 @@ struct trace_array {
 #endif
 #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
 	unsigned long		max_latency;
+#ifdef CONFIG_FSNOTIFY
+	struct dentry		*d_max_latency;
+	struct work_struct	fsnotify_work;
+	struct irq_work		fsnotify_irqwork;
+#endif
 #endif
 	struct trace_pid_list	__rcu *filtered_pids;
 	/*

@@ -337,7 +345,6 @@ extern struct list_head ftrace_trace_arrays;
 extern struct mutex trace_types_lock;
 
 extern int trace_array_get(struct trace_array *tr);
-extern void trace_array_put(struct trace_array *tr);
 extern int tracing_check_open_get_tr(struct trace_array *tr);
 
 extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);

@@ -786,6 +793,17 @@ void update_max_tr_single(struct trace_array *tr,
 			  struct task_struct *tsk, int cpu);
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
+#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
+	defined(CONFIG_FSNOTIFY)
+
+void latency_fsnotify(struct trace_array *tr);
+
+#else
+
+static inline void latency_fsnotify(struct trace_array *tr) { }
+
+#endif
+
 #ifdef CONFIG_STACKTRACE
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc);

@@ -804,6 +822,8 @@ extern void trace_event_follow_fork(struct trace_array *tr, bool enable);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 extern unsigned long ftrace_update_tot_cnt;
+extern unsigned long ftrace_number_of_pages;
+extern unsigned long ftrace_number_of_groups;
 void ftrace_init_trace_array(struct trace_array *tr);
 #else
 static inline void ftrace_init_trace_array(struct trace_array *tr) { }

@@ -853,8 +873,6 @@ trace_vprintk(unsigned long ip, const char *fmt, va_list args);
 extern int
 trace_array_vprintk(struct trace_array *tr,
 		    unsigned long ip, const char *fmt, va_list args);
-int trace_array_printk(struct trace_array *tr,
-		       unsigned long ip, const char *fmt, ...);
 int trace_array_printk_buf(struct ring_buffer *buffer,
 			   unsigned long ip, const char *fmt, ...);
 void trace_printk_seq(struct trace_seq *s);

@@ -1870,7 +1888,6 @@ extern const char *__start___tracepoint_str[];
 extern const char *__stop___tracepoint_str[];
 
 void trace_printk_control(bool enabled);
-void trace_printk_init_buffers(void);
 void trace_printk_start_comm(void);
 int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
 int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
@@ -244,7 +244,7 @@ static int annotated_branch_stat_headers(struct seq_file *m)
 	return 0;
 }
 
-static inline long get_incorrect_percent(struct ftrace_branch_data *p)
+static inline long get_incorrect_percent(const struct ftrace_branch_data *p)
 {
 	long percent;
 

@@ -332,10 +332,10 @@ annotated_branch_stat_next(void *v, int idx)
 	return p;
 }
 
-static int annotated_branch_stat_cmp(void *p1, void *p2)
+static int annotated_branch_stat_cmp(const void *p1, const void *p2)
 {
-	struct ftrace_branch_data *a = p1;
-	struct ftrace_branch_data *b = p2;
+	const struct ftrace_branch_data *a = p1;
+	const struct ftrace_branch_data *b = p2;
 
 	long percent_a, percent_b;
 
@@ -793,6 +793,8 @@ int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
 	char *event = NULL, *sub = NULL, *match;
 	int ret;
 
+	if (!tr)
+		return -ENOENT;
 	/*
	 * The buf format can be <subsystem>:<event-name>
	 *  *:<event-name> means any event by that name.

@@ -825,7 +827,6 @@ int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(ftrace_set_clr_event);
 
 /**
  * trace_set_clr_event - enable or disable an event

@@ -850,6 +851,32 @@ int trace_set_clr_event(const char *system, const char *event, int set)
 }
 EXPORT_SYMBOL_GPL(trace_set_clr_event);
 
+/**
+ * trace_array_set_clr_event - enable or disable an event for a trace array.
+ * @tr: concerned trace array.
+ * @system: system name to match (NULL for any system)
+ * @event: event name to match (NULL for all events, within system)
+ * @enable: true to enable, false to disable
+ *
+ * This is a way for other parts of the kernel to enable or disable
+ * event recording.
+ *
+ * Returns 0 on success, -EINVAL if the parameters do not match any
+ * registered events.
+ */
+int trace_array_set_clr_event(struct trace_array *tr, const char *system,
+		const char *event, bool enable)
+{
+	int set;
+
+	if (!tr)
+		return -ENOENT;
+
+	set = (enable == true) ? 1 : 0;
+	return __ftrace_set_clr_event(tr, NULL, system, event, set);
+}
+EXPORT_SYMBOL_GPL(trace_array_set_clr_event);
+
 /* 128 should be much more than enough */
 #define EVENT_BUF_SIZE		127
 
@@ -23,7 +23,7 @@
 #include "trace_dynevent.h"
 
 #define SYNTH_SYSTEM		"synthetic"
-#define SYNTH_FIELDS_MAX	16
+#define SYNTH_FIELDS_MAX	32
 
 #define STR_VAR_LEN_MAX		32 /* must be multiple of sizeof(u64) */
 
@@ -171,7 +171,7 @@ ftrace_define_fields_##name(struct trace_event_call *event_call)	\
 #define FTRACE_ENTRY_REG(call, struct_name, etype, tstruct, print, filter,\
 			 regfn)						\
 									\
-struct trace_event_class __refdata event_class_ftrace_##call = {	\
+static struct trace_event_class __refdata event_class_ftrace_##call = {	\
 	.system			= __stringify(TRACE_SYSTEM),		\
 	.define_fields		= ftrace_define_fields_##call,		\
 	.fields			= LIST_HEAD_INIT(event_class_ftrace_##call.fields),\

@@ -187,7 +187,7 @@ struct trace_event_call __used event_##call = {			\
 	.print_fmt		= print,				\
 	.flags			= TRACE_EVENT_FL_IGNORE_ENABLE,		\
 };									\
-struct trace_event_call __used						\
+static struct trace_event_call __used					\
 __attribute__((section("_ftrace_events"))) *__event_##call = &event_##call;
 
 #undef FTRACE_ENTRY
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * trace_hwlatdetect.c - A simple Hardware Latency detector.
+ * trace_hwlat.c - A simple Hardware Latency detector.
  *
  * Use this tracer to detect large system latencies induced by the behavior of
  * certain underlying system hardware or firmware, independent of Linux itself.

@@ -237,6 +237,7 @@ static int get_sample(void)
 	/* If we exceed the threshold value, we have found a hardware latency */
 	if (sample > thresh || outer_sample > thresh) {
 		struct hwlat_sample s;
+		u64 latency;
 
 		ret = 1;
 

@@ -253,11 +254,13 @@ static int get_sample(void)
 		s.nmi_count = nmi_count;
 		trace_hwlat_sample(&s);
 
+		latency = max(sample, outer_sample);
+
 		/* Keep a running maximum ever recorded hardware latency */
-		if (sample > tr->max_latency)
-			tr->max_latency = sample;
-		if (outer_sample > tr->max_latency)
-			tr->max_latency = outer_sample;
+		if (latency > tr->max_latency) {
+			tr->max_latency = latency;
+			latency_fsnotify(tr);
+		}
 	}
 
 out:

@@ -276,7 +279,7 @@ static void move_to_next_cpu(void)
 		return;
 	/*
	 * If for some reason the user modifies the CPU affinity
-	 * of this thread, than stop migrating for the duration
+	 * of this thread, then stop migrating for the duration
	 * of the current test.
	 */
 	if (!cpumask_equal(current_mask, current->cpus_ptr))
@@ -435,11 +435,10 @@ static int disable_trace_kprobe(struct trace_event_call *call,
 
 #if defined(CONFIG_KPROBES_ON_FTRACE) && \
 	!defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE)
-static bool within_notrace_func(struct trace_kprobe *tk)
+static bool __within_notrace_func(unsigned long addr)
 {
-	unsigned long offset, size, addr;
+	unsigned long offset, size;
 
-	addr = trace_kprobe_address(tk);
 	if (!addr || !kallsyms_lookup_size_offset(addr, &size, &offset))
 		return false;
 

@@ -452,6 +451,28 @@ static bool within_notrace_func(struct trace_kprobe *tk)
 	 */
 	return !ftrace_location_range(addr, addr + size - 1);
 }
+
+static bool within_notrace_func(struct trace_kprobe *tk)
+{
+	unsigned long addr = trace_kprobe_address(tk);
+	char symname[KSYM_NAME_LEN], *p;
+
+	if (!__within_notrace_func(addr))
+		return false;
+
+	/* Check if the address is on a suffixed-symbol */
+	if (!lookup_symbol_name(addr, symname)) {
+		p = strchr(symname, '.');
+		if (!p)
+			return true;
+		*p = '\0';
+		addr = (unsigned long)kprobe_lookup_name(symname, 0);
+		if (addr)
+			return __within_notrace_func(addr);
+	}
+
+	return true;
+}
 #else
 #define within_notrace_func(tk)	(false)
 #endif
@@ -274,6 +274,21 @@ trace_print_array_seq(struct trace_seq *p, const void *buf, int count,
 }
 EXPORT_SYMBOL(trace_print_array_seq);
 
+const char *
+trace_print_hex_dump_seq(struct trace_seq *p, const char *prefix_str,
+			 int prefix_type, int rowsize, int groupsize,
+			 const void *buf, size_t len, bool ascii)
+{
+	const char *ret = trace_seq_buffer_ptr(p);
+
+	trace_seq_putc(p, '\n');
+	trace_seq_hex_dump(p, prefix_str, prefix_type,
+			   rowsize, groupsize, buf, len, ascii);
+	trace_seq_putc(p, 0);
+	return ret;
+}
+EXPORT_SYMBOL(trace_print_hex_dump_seq);
+
 int trace_raw_output_prep(struct trace_iterator *iter,
 			  struct trace_event *trace_event)
 {
@@ -376,3 +376,33 @@ int trace_seq_to_user(struct trace_seq *s, char __user *ubuf, int cnt)
 	return seq_buf_to_user(&s->seq, ubuf, cnt);
 }
 EXPORT_SYMBOL_GPL(trace_seq_to_user);
+
+int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
+		       int prefix_type, int rowsize, int groupsize,
+		       const void *buf, size_t len, bool ascii)
+{
+	unsigned int save_len = s->seq.len;
+
+	if (s->full)
+		return 0;
+
+	__trace_seq_init(s);
+
+	if (TRACE_SEQ_BUF_LEFT(s) < 1) {
+		s->full = 1;
+		return 0;
+	}
+
+	seq_buf_hex_dump(&(s->seq), prefix_str,
+			 prefix_type, rowsize, groupsize,
+			 buf, len, ascii);
+
+	if (unlikely(seq_buf_has_overflowed(&s->seq))) {
+		s->seq.len = save_len;
+		s->full = 1;
+		return 0;
+	}
+
+	return 1;
+}
+EXPORT_SYMBOL(trace_seq_hex_dump);
@@ -72,9 +72,7 @@ static void destroy_session(struct stat_session *session)
 	kfree(session);
 }
 
-typedef int (*cmp_stat_t)(void *, void *);
-
-static int insert_stat(struct rb_root *root, void *stat, cmp_stat_t cmp)
+static int insert_stat(struct rb_root *root, void *stat, cmp_func_t cmp)
 {
 	struct rb_node **new = &(root->rb_node), *parent = NULL;
 	struct stat_node *data;

@@ -112,7 +110,7 @@ static int insert_stat(struct rb_root *root, void *stat, cmp_stat_t cmp)
  * This one will force an insertion as right-most node
  * in the rbtree.
  */
-static int dummy_cmp(void *p1, void *p2)
+static int dummy_cmp(const void *p1, const void *p2)
 {
 	return -1;
 }
@@ -16,7 +16,7 @@ struct tracer_stat {
 	void			*(*stat_start)(struct tracer_stat *trace);
 	void			*(*stat_next)(void *prev, int idx);
 	/* Compare two entries for stats sorting */
-	int			(*stat_cmp)(void *p1, void *p2);
+	cmp_func_t		stat_cmp;
 	/* Print a stat entry */
 	int			(*stat_show)(struct seq_file *s, void *p);
 	/* Release an entry */
@@ -7,6 +7,7 @@
 #include <linux/module.h>	/* for MODULE_NAME_LEN via KSYM_SYMBOL_LEN */
 #include <linux/ftrace.h>
 #include <linux/perf_event.h>
+#include <linux/xarray.h>
 #include <asm/syscall.h>
 
 #include "trace_output.h"

@@ -30,6 +31,7 @@ syscall_get_enter_fields(struct trace_event_call *call)
 extern struct syscall_metadata *__start_syscalls_metadata[];
 extern struct syscall_metadata *__stop_syscalls_metadata[];
 
+static DEFINE_XARRAY(syscalls_metadata_sparse);
 static struct syscall_metadata **syscalls_metadata;
 
 #ifndef ARCH_HAS_SYSCALL_MATCH_SYM_NAME

@@ -101,6 +103,9 @@ find_syscall_meta(unsigned long syscall)
 
 static struct syscall_metadata *syscall_nr_to_meta(int nr)
 {
+	if (IS_ENABLED(CONFIG_HAVE_SPARSE_SYSCALL_NR))
+		return xa_load(&syscalls_metadata_sparse, (unsigned long)nr);
+
 	if (!syscalls_metadata || nr >= NR_syscalls || nr < 0)
 		return NULL;
 

@@ -536,13 +541,17 @@ void __init init_ftrace_syscalls(void)
 	struct syscall_metadata *meta;
 	unsigned long addr;
 	int i;
+	void *ret;
 
-	syscalls_metadata = kcalloc(NR_syscalls, sizeof(*syscalls_metadata),
-				    GFP_KERNEL);
-	if (!syscalls_metadata) {
-		WARN_ON(1);
-		return;
+	if (!IS_ENABLED(CONFIG_HAVE_SPARSE_SYSCALL_NR)) {
+		syscalls_metadata = kcalloc(NR_syscalls,
+					sizeof(*syscalls_metadata),
+					GFP_KERNEL);
+		if (!syscalls_metadata) {
+			WARN_ON(1);
+			return;
+		}
 	}
 
 	for (i = 0; i < NR_syscalls; i++) {
 		addr = arch_syscall_addr(i);

@@ -551,7 +560,16 @@ void __init init_ftrace_syscalls(void)
 			continue;
 
 		meta->syscall_nr = i;
-		syscalls_metadata[i] = meta;
+
+		if (!IS_ENABLED(CONFIG_HAVE_SPARSE_SYSCALL_NR)) {
+			syscalls_metadata[i] = meta;
+		} else {
+			ret = xa_store(&syscalls_metadata_sparse, i, meta,
+					GFP_KERNEL);
+			WARN(xa_is_err(ret),
+					"Syscall memory allocation failed\n");
+		}
 	}
 }
@@ -29,7 +29,7 @@
  * the same comparison function for both sort() and bsearch().
  */
 void *bsearch(const void *key, const void *base, size_t num, size_t size,
-	      int (*cmp)(const void *key, const void *elt))
+	      cmp_func_t cmp)
 {
 	const char *pivot;
 	int result;
@@ -328,3 +328,65 @@ int seq_buf_to_user(struct seq_buf *s, char __user *ubuf, int cnt)
 	s->readpos += cnt;
 	return cnt;
 }
+
+/**
+ * seq_buf_hex_dump - print formatted hex dump into the sequence buffer
+ * @s: seq_buf descriptor
+ * @prefix_str: string to prefix each line with;
+ *  caller supplies trailing spaces for alignment if desired
+ * @prefix_type: controls whether prefix of an offset, address, or none
+ *  is printed (%DUMP_PREFIX_OFFSET, %DUMP_PREFIX_ADDRESS, %DUMP_PREFIX_NONE)
+ * @rowsize: number of bytes to print per line; must be 16 or 32
+ * @groupsize: number of bytes to print at a time (1, 2, 4, 8; default = 1)
+ * @buf: data blob to dump
+ * @len: number of bytes in the @buf
+ * @ascii: include ASCII after the hex output
+ *
+ * Function is an analogue of print_hex_dump() and thus has similar interface.
+ *
+ * linebuf size is maximal length for one line.
+ * 32 * 3 - maximum bytes per line, each printed into 2 chars + 1 for
+ *	separating space
+ * 2 - spaces separating hex dump and ascii representation
+ * 32 - ascii representation
+ * 1 - terminating '\0'
+ *
+ * Returns zero on success, -1 on overflow
+ */
+int seq_buf_hex_dump(struct seq_buf *s, const char *prefix_str, int prefix_type,
+		     int rowsize, int groupsize,
+		     const void *buf, size_t len, bool ascii)
+{
+	const u8 *ptr = buf;
+	int i, linelen, remaining = len;
+	unsigned char linebuf[32 * 3 + 2 + 32 + 1];
+	int ret;
+
+	if (rowsize != 16 && rowsize != 32)
+		rowsize = 16;
+
+	for (i = 0; i < len; i += rowsize) {
+		linelen = min(remaining, rowsize);
+		remaining -= rowsize;
+
+		hex_dump_to_buffer(ptr + i, linelen, rowsize, groupsize,
+				   linebuf, sizeof(linebuf), ascii);
+
+		switch (prefix_type) {
+		case DUMP_PREFIX_ADDRESS:
+			ret = seq_buf_printf(s, "%s%p: %s\n",
+			       prefix_str, ptr + i, linebuf);
+			break;
+		case DUMP_PREFIX_OFFSET:
+			ret = seq_buf_printf(s, "%s%.8x: %s\n",
+					     prefix_str, i, linebuf);
+			break;
+		default:
+			ret = seq_buf_printf(s, "%s%s\n", prefix_str, linebuf);
+			break;
+		}
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
lib/sort.c

@@ -117,8 +117,6 @@ static void swap_bytes(void *a, void *b, size_t n)
 	} while (n);
 }
 
-typedef void (*swap_func_t)(void *a, void *b, int size);
-
 /*
  * The values are arbitrary as long as they can't be confused with
  * a pointer, but small integers make for the smallest compare

@@ -144,12 +142,9 @@ static void do_swap(void *a, void *b, size_t size, swap_func_t swap_func)
 		swap_func(a, b, (int)size);
 }
 
-typedef int (*cmp_func_t)(const void *, const void *);
-typedef int (*cmp_r_func_t)(const void *, const void *, const void *);
 #define _CMP_WRAPPER ((cmp_r_func_t)0L)
 
-static int do_cmp(const void *a, const void *b,
-		  cmp_r_func_t cmp, const void *priv)
+static int do_cmp(const void *a, const void *b, cmp_r_func_t cmp, const void *priv)
 {
 	if (cmp == _CMP_WRAPPER)
 		return ((cmp_func_t)(priv))(a, b);

@@ -202,8 +197,8 @@ static size_t parent(size_t i, unsigned int lsbit, size_t size)
 * it less suitable for kernel use.
 */
 void sort_r(void *base, size_t num, size_t size,
-	    int (*cmp_func)(const void *, const void *, const void *),
-	    void (*swap_func)(void *, void *, int size),
+	    cmp_r_func_t cmp_func,
+	    swap_func_t swap_func,
 	    const void *priv)
 {
 	/* pre-scale counters for performance */

@@ -269,8 +264,8 @@ void sort_r(void *base, size_t num, size_t size,
 EXPORT_SYMBOL(sort_r);
 
 void sort(void *base, size_t num, size_t size,
-	  int (*cmp_func)(const void *, const void *),
-	  void (*swap_func)(void *, void *, int size))
+	  cmp_func_t cmp_func,
+	  swap_func_t swap_func)
 {
 	return sort_r(base, num, size, _CMP_WRAPPER, swap_func, cmp_func);
 }
@@ -19,6 +19,21 @@ config SAMPLE_TRACE_PRINTK
 	  This builds a module that calls trace_printk() and can be used to
 	  test various trace_printk() calls from a module.
 
+config SAMPLE_FTRACE_DIRECT
+	tristate "Build register_ftrace_direct() example"
+	depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS && m
+	depends on X86_64 # has x86_64 inlined asm
+	help
+	  This builds an ftrace direct function example
+	  that hooks to wake_up_process and prints the parameters.
+
+config SAMPLE_TRACE_ARRAY
+	tristate "Build sample module for kernel access to Ftrace instances"
+	depends on EVENT_TRACING && m
+	help
+	  This builds a module that demonstrates the use of various APIs to
+	  access Ftrace instances from within the kernel.
+
 config SAMPLE_KOBJECT
 	tristate "Build kobject examples"
 	help
@@ -17,6 +17,8 @@ obj-$(CONFIG_SAMPLE_RPMSG_CLIENT)	+= rpmsg/
 subdir-$(CONFIG_SAMPLE_SECCOMP)		+= seccomp
 obj-$(CONFIG_SAMPLE_TRACE_EVENTS)	+= trace_events/
 obj-$(CONFIG_SAMPLE_TRACE_PRINTK)	+= trace_printk/
+obj-$(CONFIG_SAMPLE_FTRACE_DIRECT)	+= ftrace/
+obj-$(CONFIG_SAMPLE_TRACE_ARRAY)	+= ftrace/
 obj-$(CONFIG_VIDEO_PCI_SKELETON)	+= v4l/
 obj-y					+= vfio-mdev/
 subdir-$(CONFIG_SAMPLE_VFS)		+= vfs
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct.o
+obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-too.o
+obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-modify.o
+
+CFLAGS_sample-trace-array.o := -I$(src)
+obj-$(CONFIG_SAMPLE_TRACE_ARRAY) += sample-trace-array.o
@ -0,0 +1,88 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
#include <linux/module.h>
|
||||
#include <linux/kthread.h>
#include <linux/ftrace.h>

void my_direct_func1(void)
{
	trace_printk("my direct func1\n");
}

void my_direct_func2(void)
{
	trace_printk("my direct func2\n");
}

extern void my_tramp1(void *);
extern void my_tramp2(void *);

static unsigned long my_ip = (unsigned long)schedule;

asm (
"	.pushsection .text, \"ax\", @progbits\n"
"	my_tramp1:"
"	pushq %rbp\n"
"	movq %rsp, %rbp\n"
"	call my_direct_func1\n"
"	leave\n"
"	ret\n"
"	my_tramp2:"
"	pushq %rbp\n"
"	movq %rsp, %rbp\n"
"	call my_direct_func2\n"
"	leave\n"
"	ret\n"
"	.popsection\n"
);

static unsigned long my_tramp = (unsigned long)my_tramp1;
static unsigned long tramps[2] = {
	(unsigned long)my_tramp1,
	(unsigned long)my_tramp2,
};

static int simple_thread(void *arg)
{
	static int t;
	int ret = 0;

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(2 * HZ);

		if (ret)
			continue;
		t ^= 1;
		ret = modify_ftrace_direct(my_ip, my_tramp, tramps[t]);
		if (!ret)
			my_tramp = tramps[t];
		WARN_ON_ONCE(ret);
	}

	return 0;
}

static struct task_struct *simple_tsk;

static int __init ftrace_direct_init(void)
{
	int ret;

	ret = register_ftrace_direct(my_ip, my_tramp);
	if (!ret)
		simple_tsk = kthread_run(simple_thread, NULL, "event-sample-fn");
	return ret;
}

static void __exit ftrace_direct_exit(void)
{
	kthread_stop(simple_tsk);
	unregister_ftrace_direct(my_ip, my_tramp);
}

module_init(ftrace_direct_init);
module_exit(ftrace_direct_exit);

MODULE_AUTHOR("Steven Rostedt");
MODULE_DESCRIPTION("Example use case of using modify_ftrace_direct()");
MODULE_LICENSE("GPL");
@@ -0,0 +1,51 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/module.h>

#include <linux/mm.h> /* for handle_mm_fault() */
#include <linux/ftrace.h>

void my_direct_func(struct vm_area_struct *vma,
		    unsigned long address, unsigned int flags)
{
	trace_printk("handle mm fault vma=%p address=%lx flags=%x\n",
		     vma, address, flags);
}

extern void my_tramp(void *);

asm (
"	.pushsection .text, \"ax\", @progbits\n"
"	my_tramp:"
"	pushq %rbp\n"
"	movq %rsp, %rbp\n"
"	pushq %rdi\n"
"	pushq %rsi\n"
"	pushq %rdx\n"
"	call my_direct_func\n"
"	popq %rdx\n"
"	popq %rsi\n"
"	popq %rdi\n"
"	leave\n"
"	ret\n"
"	.popsection\n"
);


static int __init ftrace_direct_init(void)
{
	return register_ftrace_direct((unsigned long)handle_mm_fault,
				      (unsigned long)my_tramp);
}

static void __exit ftrace_direct_exit(void)
{
	unregister_ftrace_direct((unsigned long)handle_mm_fault,
				 (unsigned long)my_tramp);
}

module_init(ftrace_direct_init);
module_exit(ftrace_direct_exit);

MODULE_AUTHOR("Steven Rostedt");
MODULE_DESCRIPTION("Another example use case of using register_ftrace_direct()");
MODULE_LICENSE("GPL");
@@ -0,0 +1,45 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/module.h>

#include <linux/sched.h> /* for wake_up_process() */
#include <linux/ftrace.h>

void my_direct_func(struct task_struct *p)
{
	trace_printk("waking up %s-%d\n", p->comm, p->pid);
}

extern void my_tramp(void *);

asm (
"	.pushsection .text, \"ax\", @progbits\n"
"	my_tramp:"
"	pushq %rbp\n"
"	movq %rsp, %rbp\n"
"	pushq %rdi\n"
"	call my_direct_func\n"
"	popq %rdi\n"
"	leave\n"
"	ret\n"
"	.popsection\n"
);


static int __init ftrace_direct_init(void)
{
	return register_ftrace_direct((unsigned long)wake_up_process,
				      (unsigned long)my_tramp);
}

static void __exit ftrace_direct_exit(void)
{
	unregister_ftrace_direct((unsigned long)wake_up_process,
				 (unsigned long)my_tramp);
}

module_init(ftrace_direct_init);
module_exit(ftrace_direct_exit);

MODULE_AUTHOR("Steven Rostedt");
MODULE_DESCRIPTION("Example use case of using register_ftrace_direct()");
MODULE_LICENSE("GPL");
@@ -0,0 +1,131 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/trace.h>
#include <linux/trace_events.h>
#include <linux/timer.h>
#include <linux/err.h>
#include <linux/jiffies.h>

/*
 * Any file that uses trace points must include the header.
 * But only one file must include the header by defining
 * CREATE_TRACE_POINTS first. That generates the C code which
 * creates the handles for the trace points.
 */
#define CREATE_TRACE_POINTS
#include "sample-trace-array.h"

struct trace_array *tr;
static void mytimer_handler(struct timer_list *unused);
static struct task_struct *simple_tsk;

/*
 * mytimer: Timer setup to disable tracing for event "sample_event". This
 * timer is only for the purposes of the sample module to demonstrate access
 * of Ftrace instances from within the kernel.
 */
static DEFINE_TIMER(mytimer, mytimer_handler);

static void mytimer_handler(struct timer_list *unused)
{
	/*
	 * Disable tracing for event "sample_event".
	 */
	trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
			false);
}

static void simple_thread_func(int count)
{
	set_current_state(TASK_INTERRUPTIBLE);
	schedule_timeout(HZ);

	/*
	 * Printing count value using trace_array_printk() - the
	 * trace_printk() equivalent for the instance buffers.
	 */
	trace_array_printk(tr, _THIS_IP_, "trace_array_printk: count=%d\n",
			count);
	/*
	 * Tracepoint for event "sample_event". This will print the
	 * current value of count and current jiffies.
	 */
	trace_sample_event(count, jiffies);
}

static int simple_thread(void *arg)
{
	int count = 0;
	unsigned long delay = msecs_to_jiffies(5000);

	/*
	 * Enable tracing for "sample_event".
	 */
	trace_array_set_clr_event(tr, "sample-subsystem", "sample_event", true);

	/*
	 * Adding timer - mytimer. This timer will disable tracing after
	 * delay jiffies (5 seconds here).
	 */
	add_timer(&mytimer);
	mod_timer(&mytimer, jiffies + delay);

	while (!kthread_should_stop())
		simple_thread_func(count++);

	del_timer(&mytimer);

	/*
	 * trace_array_put() decrements the reference counter associated with
	 * the trace array - "tr". We are done using the trace array, hence
	 * decrement the reference counter so that it can be destroyed using
	 * trace_array_destroy().
	 */
	trace_array_put(tr);

	return 0;
}

static int __init sample_trace_array_init(void)
{
	/*
	 * Return a pointer to the trace array with name "sample-instance" if
	 * it exists, else create a new trace array.
	 *
	 * NOTE: This function increments the reference counter
	 * associated with the trace array - "tr".
	 */
	tr = trace_array_get_by_name("sample-instance");

	if (!tr)
		return -1;
	/*
	 * If context specific per-cpu buffers haven't already been allocated.
	 */
	trace_printk_init_buffers();

	simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
	if (IS_ERR(simple_tsk))
		return -1;
	return 0;
}

static void __exit sample_trace_array_exit(void)
{
	kthread_stop(simple_tsk);

	/*
	 * We are unloading our module and no longer require the trace array.
	 * Remove/destroy "tr" using trace_array_destroy().
	 */
	trace_array_destroy(tr);
}

module_init(sample_trace_array_init);
module_exit(sample_trace_array_exit);

MODULE_AUTHOR("Divya Indi");
MODULE_DESCRIPTION("Sample module for kernel access to Ftrace instances");
MODULE_LICENSE("GPL");
@@ -0,0 +1,84 @@
/* SPDX-License-Identifier: GPL-2.0 */

/*
 * If TRACE_SYSTEM is defined, that will be the directory created
 * in the ftrace directory under /sys/kernel/tracing/events/<system>
 *
 * The define_trace.h below will also look for a file name of
 * TRACE_SYSTEM.h where TRACE_SYSTEM is what is defined here.
 * In this case, it would look for sample-trace.h
 *
 * If the header name will be different than the system name
 * (as in this case), then you can override the header name that
 * define_trace.h will look up by defining TRACE_INCLUDE_FILE
 *
 * This file is called sample-trace-array.h but we want the system
 * to be called "sample-subsystem". Therefore we must define the name
 * of this file:
 *
 * #define TRACE_INCLUDE_FILE sample-trace-array
 *
 * As we do at the bottom of this file.
 *
 * Notice that TRACE_SYSTEM should be defined outside of #if
 * protection, just like TRACE_INCLUDE_FILE.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM sample-subsystem

/*
 * TRACE_SYSTEM is expected to be a valid C variable name (alpha-numeric
 * characters and underscores), although it may start with numbers. If
 * for some reason it is not, you need to add the following lines:
 */
#undef TRACE_SYSTEM_VAR
#define TRACE_SYSTEM_VAR sample_subsystem

/*
 * But the above is only needed if TRACE_SYSTEM is not alpha-numeric
 * and underscores. By default, TRACE_SYSTEM_VAR will be equal to
 * TRACE_SYSTEM. As TRACE_SYSTEM_VAR must be alpha-numeric, if
 * TRACE_SYSTEM is not, then TRACE_SYSTEM_VAR must be defined with
 * only alpha-numeric characters and underscores.
 *
 * The TRACE_SYSTEM_VAR is only used internally and not visible to
 * user space.
 */

/*
 * Notice that this file is not protected like a normal header.
 * We also must allow for rereading of this file. The
 *
 * || defined(TRACE_HEADER_MULTI_READ)
 *
 * serves this purpose.
 */
#if !defined(_SAMPLE_TRACE_ARRAY_H) || defined(TRACE_HEADER_MULTI_READ)
#define _SAMPLE_TRACE_ARRAY_H

#include <linux/tracepoint.h>
TRACE_EVENT(sample_event,

	TP_PROTO(int count, unsigned long time),

	TP_ARGS(count, time),

	TP_STRUCT__entry(
		__field(int, count)
		__field(unsigned long, time)
	),

	TP_fast_assign(
		__entry->count = count;
		__entry->time = time;
	),

	TP_printk("count value=%d at jiffies=%lu", __entry->count,
		  __entry->time)
);
#endif

#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE sample-trace-array
#include <trace/define_trace.h>
@@ -0,0 +1 @@
timeout=0
@@ -0,0 +1,69 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Test ftrace direct functions against tracers

rmmod ftrace-direct ||:
if ! modprobe ftrace-direct ; then
	echo "No ftrace-direct sample module - please make CONFIG_SAMPLE_FTRACE_DIRECT=m"
	exit_unresolved;
fi

echo "Let the module run a little"
sleep 1

grep -q "my_direct_func: waking up" trace

rmmod ftrace-direct

test_tracer() {
	tracer=$1

	# tracer -> direct -> no direct > no tracer
	echo $tracer > current_tracer
	modprobe ftrace-direct
	rmmod ftrace-direct
	echo nop > current_tracer

	# tracer -> direct -> no tracer > no direct
	echo $tracer > current_tracer
	modprobe ftrace-direct
	echo nop > current_tracer
	rmmod ftrace-direct

	# direct -> tracer -> no tracer > no direct
	modprobe ftrace-direct
	echo $tracer > current_tracer
	echo nop > current_tracer
	rmmod ftrace-direct

	# direct -> tracer -> no direct > no tracer
	modprobe ftrace-direct
	echo $tracer > current_tracer
	rmmod ftrace-direct
	echo nop > current_tracer
}

for t in `cat available_tracers`; do
	if [ "$t" != "nop" ]; then
		test_tracer $t
	fi
done

echo nop > current_tracer
rmmod ftrace-direct ||:

# Now do the same thing with another direct function registered
echo "Running with another ftrace direct function"

rmmod ftrace-direct-too ||:
modprobe ftrace-direct-too

for t in `cat available_tracers`; do
	if [ "$t" != "nop" ]; then
		test_tracer $t
	fi
done

echo nop > current_tracer
rmmod ftrace-direct ||:
rmmod ftrace-direct-too ||:
@@ -0,0 +1,84 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Test ftrace direct functions against kprobes

rmmod ftrace-direct ||:
if ! modprobe ftrace-direct ; then
	echo "No ftrace-direct sample module - please build with CONFIG_SAMPLE_FTRACE_DIRECT=m"
	exit_unresolved;
fi

if [ ! -f kprobe_events ]; then
	echo "No kprobe_events file - please build with CONFIG_KPROBE_EVENTS"
	exit_unresolved;
fi

echo "Let the module run a little"
sleep 1

grep -q "my_direct_func: waking up" trace

rmmod ftrace-direct

echo 'p:kwake wake_up_process task=$arg1' > kprobe_events

start_direct() {
	echo > trace
	modprobe ftrace-direct
	sleep 1
	grep -q "my_direct_func: waking up" trace
}

stop_direct() {
	rmmod ftrace-direct
}

enable_probe() {
	echo > trace
	echo 1 > events/kprobes/kwake/enable
	sleep 1
	grep -q "kwake:" trace
}

disable_probe() {
	echo 0 > events/kprobes/kwake/enable
}

test_kprobes() {
	# probe -> direct -> no direct > no probe
	enable_probe
	start_direct
	stop_direct
	disable_probe

	# probe -> direct -> no probe > no direct
	enable_probe
	start_direct
	disable_probe
	stop_direct

	# direct -> probe -> no probe > no direct
	start_direct
	enable_probe
	disable_probe
	stop_direct

	# direct -> probe -> no direct > no probe
	start_direct
	enable_probe
	stop_direct
	disable_probe
}

test_kprobes

# Now do this with a second registered direct function
echo "Running with another ftrace direct function"

modprobe ftrace-direct-too

test_kprobes

rmmod ftrace-direct-too

echo > kprobe_events
@@ -5,6 +5,7 @@ TEST_PROGS := \
 	test-livepatch.sh \
 	test-callbacks.sh \
 	test-shadow-vars.sh \
-	test-state.sh
+	test-state.sh \
+	test-ftrace.sh
 
 include ../lib.mk
@@ -29,29 +29,45 @@ function die() {
 	exit 1
 }
 
-function push_dynamic_debug() {
+function push_config() {
 	DYNAMIC_DEBUG=$(grep '^kernel/livepatch' /sys/kernel/debug/dynamic_debug/control | \
 			awk -F'[: ]' '{print "file " $1 " line " $2 " " $4}')
+	FTRACE_ENABLED=$(sysctl --values kernel.ftrace_enabled)
 }
 
-function pop_dynamic_debug() {
+function pop_config() {
 	if [[ -n "$DYNAMIC_DEBUG" ]]; then
 		echo -n "$DYNAMIC_DEBUG" > /sys/kernel/debug/dynamic_debug/control
 	fi
+	if [[ -n "$FTRACE_ENABLED" ]]; then
+		sysctl kernel.ftrace_enabled="$FTRACE_ENABLED" &> /dev/null
+	fi
 }
 
 # set_dynamic_debug() - save the current dynamic debug config and tweak
 # 			it for the self-tests. Set a script exit trap
 #			that restores the original config.
 function set_dynamic_debug() {
-	push_dynamic_debug
-	trap pop_dynamic_debug EXIT INT TERM HUP
 	cat <<-EOF > /sys/kernel/debug/dynamic_debug/control
 	file kernel/livepatch/* +p
 	func klp_try_switch_task -p
 	EOF
 }
 
+function set_ftrace_enabled() {
+	local sysctl="$1"
+	result=$(sysctl kernel.ftrace_enabled="$1" 2>&1 | paste --serial --delimiters=' ')
+	echo "livepatch: $result" > /dev/kmsg
+}
+
+# setup_config - save the current config and set a script exit trap that
+#		 restores the original config. Setup the dynamic debug
+#		 for verbose livepatching output and turn on
+#		 the ftrace_enabled sysctl.
+function setup_config() {
+	push_config
+	set_dynamic_debug
+	set_ftrace_enabled 1
+	trap pop_config EXIT INT TERM HUP
+}
+
 # loop_until(cmd) - loop a command until it is successful or $MAX_RETRIES,
 #                   sleep $RETRY_INTERVAL between attempts
 # cmd - command and its arguments to run
@@ -9,7 +9,7 @@ MOD_LIVEPATCH2=test_klp_callbacks_demo2
 MOD_TARGET=test_klp_callbacks_mod
 MOD_TARGET_BUSY=test_klp_callbacks_busy
 
-set_dynamic_debug
+setup_config
 
 
 # TEST: target module before livepatch
@@ -0,0 +1,65 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2019 Joe Lawrence <joe.lawrence@redhat.com>

. $(dirname $0)/functions.sh

MOD_LIVEPATCH=test_klp_livepatch

setup_config


# TEST: livepatch interaction with ftrace_enabled sysctl
# - turn ftrace_enabled OFF and verify livepatches can't load
# - turn ftrace_enabled ON and verify livepatch can load
# - verify that ftrace_enabled can't be turned OFF while a livepatch is loaded

echo -n "TEST: livepatch interaction with ftrace_enabled sysctl ... "
dmesg -C

set_ftrace_enabled 0
load_failing_mod $MOD_LIVEPATCH

set_ftrace_enabled 1
load_lp $MOD_LIVEPATCH
if [[ "$(cat /proc/cmdline)" != "$MOD_LIVEPATCH: this has been live patched" ]] ; then
	echo -e "FAIL\n\n"
	die "livepatch kselftest(s) failed"
fi

set_ftrace_enabled 0
if [[ "$(cat /proc/cmdline)" != "$MOD_LIVEPATCH: this has been live patched" ]] ; then
	echo -e "FAIL\n\n"
	die "livepatch kselftest(s) failed"
fi
disable_lp $MOD_LIVEPATCH
unload_lp $MOD_LIVEPATCH

check_result "livepatch: kernel.ftrace_enabled = 0
% modprobe $MOD_LIVEPATCH
livepatch: enabling patch '$MOD_LIVEPATCH'
livepatch: '$MOD_LIVEPATCH': initializing patching transition
livepatch: failed to register ftrace handler for function 'cmdline_proc_show' (-16)
livepatch: failed to patch object 'vmlinux'
livepatch: failed to enable patch '$MOD_LIVEPATCH'
livepatch: '$MOD_LIVEPATCH': canceling patching transition, going to unpatch
livepatch: '$MOD_LIVEPATCH': completing unpatching transition
livepatch: '$MOD_LIVEPATCH': unpatching complete
modprobe: ERROR: could not insert '$MOD_LIVEPATCH': Device or resource busy
livepatch: kernel.ftrace_enabled = 1
% modprobe $MOD_LIVEPATCH
livepatch: enabling patch '$MOD_LIVEPATCH'
livepatch: '$MOD_LIVEPATCH': initializing patching transition
livepatch: '$MOD_LIVEPATCH': starting patching transition
livepatch: '$MOD_LIVEPATCH': completing patching transition
livepatch: '$MOD_LIVEPATCH': patching complete
livepatch: sysctl: setting key \"kernel.ftrace_enabled\": Device or resource busy kernel.ftrace_enabled = 0
% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
livepatch: '$MOD_LIVEPATCH': starting unpatching transition
livepatch: '$MOD_LIVEPATCH': completing unpatching transition
livepatch: '$MOD_LIVEPATCH': unpatching complete
% rmmod $MOD_LIVEPATCH"


exit 0
@@ -7,7 +7,7 @@
 MOD_LIVEPATCH=test_klp_livepatch
 MOD_REPLACE=test_klp_atomic_replace
 
-set_dynamic_debug
+setup_config
 
 
 # TEST: basic function patching
@@ -6,7 +6,7 @@
 
 MOD_TEST=test_klp_shadow_vars
 
-set_dynamic_debug
+setup_config
 
 
 # TEST: basic shadow variable API