tracing: Add and use generic set_trigger_filter() implementation

Add a generic event_command.set_trigger_filter() op implementation and
have the current set of trigger commands use it - this essentially
gives them all support for filters.

Syntactically, filters are supported by adding 'if <filter>' just
after the command, in which case only events matching the filter will
invoke the trigger.  For example, to add a filter to an
enable/disable_event command:

    echo 'enable_event:system:event if common_pid == 999' > \
              .../othersys/otherevent/trigger

The above command will only enable the system:event event if the
common_pid field in the othersys:otherevent event is 999.

As another example, to add a filter to a stacktrace command:

    echo 'stacktrace if common_pid == 999' > \
              .../somesys/someevent/trigger

The above command will only trigger a stacktrace if the common_pid
field in the event is 999.

The filter syntax is the same as that described in the 'Event
filtering' section of Documentation/trace/events.txt.

Because triggers can now use filters, the trigger-invoking logic needs
to be moved in those cases - e.g. for ftrace_raw_event_calls, if a
trigger has a filter associated with it, the trigger invocation now
needs to happen after the { assign; } part of the call, in order for
the trigger condition to be tested.

There's still a SOFT_DISABLED-only check at the top of e.g. the
ftrace_raw_events function, so when an event is soft disabled but not
because of the presence of a trigger, the original SOFT_DISABLED
behavior remains unchanged.

There's also a bit of trickiness in that some triggers need to avoid
being invoked while an event is currently in the process of being
logged, since the trigger may itself log data into the trace buffer.
Thus we make sure the current event is committed before invoking those
triggers.  To do that, we split the trigger invocation in two - the
first part (event_triggers_call()) checks the filter using the current
trace record; if a command has the post_trigger flag set, it sets a
bit for itself in the return value, otherwise it directly invokes the
trigger.  Once all commands have either been invoked or have set their
return flag, event_triggers_call() returns.  The current record is
then either committed or discarded; if any commands have deferred
their triggers, those commands are finally invoked following the close
of the current event by event_triggers_post_call().

To simplify the above and make it more efficient, the TRIGGER_COND bit
is introduced, which is set only if a soft-disabled trigger needs to
use the log record for filter testing or needs to wait until the
current log record is closed.

The syscall event invocation code is also changed in analogous ways.

Because event triggers need to be able to create and free filters,
this also adds a couple of external wrappers for the existing
create_filter and free_filter functions, which are too generic to be
made extern functions themselves.
Link: http://lkml.kernel.org/r/7164930759d8719ef460357f143d995406e4eead.1382622043.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2013-10-24 21:59:29 +08:00
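To make the two-stage invocation concrete, here is a simplified sketch of the flow a raw-event probe follows after this change.  It is an illustration, not the literal code generated by ftrace.h: the "sample" event, its single field and its probe are invented names, and the irq_flags/pc bookkeeping is elided.

/*
 * Illustrative sketch only - not part of this header.  A simplified view
 * of how a generated ftrace_raw_event_<call>() probe uses the split
 * trigger invocation; "sample" names are hypothetical.
 */
struct sample_entry {			/* hypothetical event payload */
	struct trace_entry	ent;
	int			arg;
};

static notrace void
ftrace_raw_event_sample(void *__data, int arg)
{
	struct ftrace_event_file *ftrace_file = __data;
	enum event_trigger_type tt = ETT_NONE;
	struct ring_buffer_event *event;
	struct ring_buffer *buffer;
	struct sample_entry *entry;

	/* Triggers without filters don't need the trace record at all */
	if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags) &&
	    !test_bit(FTRACE_EVENT_FL_TRIGGER_COND_BIT, &ftrace_file->flags))
		event_triggers_call(ftrace_file, NULL);

	/*
	 * Soft disabled for a reason other than a conditional trigger:
	 * the original SOFT_DISABLED behavior is unchanged.
	 */
	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags) &&
	    !test_bit(FTRACE_EVENT_FL_TRIGGER_COND_BIT, &ftrace_file->flags))
		return;

	event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,
						ftrace_file->event_call->event.type,
						sizeof(*entry), 0, 0);
	if (!event)
		return;
	entry = ring_buffer_event_data(event);
	entry->arg = arg;			/* the { assign; } part */

	/* TRIGGER_COND: trigger filters are tested against the filled record */
	if (test_bit(FTRACE_EVENT_FL_TRIGGER_COND_BIT, &ftrace_file->flags))
		tt = event_triggers_call(ftrace_file, entry);

	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
		ring_buffer_discard_commit(buffer, event);
	else
		trace_buffer_unlock_commit(buffer, event, 0, 0);

	/* post_trigger commands run only after the record is closed */
	if (tt)
		event_triggers_post_call(ftrace_file, tt);
}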

#ifndef _LINUX_FTRACE_EVENT_H
#define _LINUX_FTRACE_EVENT_H

#include <linux/ring_buffer.h>
#include <linux/trace_seq.h>
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/perf_event.h>
#include <linux/tracepoint.h>

struct trace_array;
tracing: Consolidate max_tr into main trace_array structure

Currently, the way the latency tracers and the snapshot feature work
is to have a separate trace_array called "max_tr" that holds the
snapshot buffer.  For latency tracers, this snapshot buffer is used
to swap the running buffer with this buffer to save the current max
latency.

The only items needed for the max_tr are really just a copy of the
buffer itself, the per_cpu data pointers, the time_start timestamp
that states when the max latency was triggered, and the cpu that the
max latency was triggered on.  All other fields in trace_array are
unused by the max_tr, making the max_tr mostly bloat.

This change removes the max_tr completely, and adds a new structure
called trace_buffer, that holds the buffer pointer, the per_cpu data
pointers, the time_start timestamp, and the cpu where the latency
occurred.

The trace_array now has two trace_buffers, one for the normal trace
and one for the max trace or snapshot.  By doing this, not only do we
remove the bloat from the max_tr but the instances of traces can now
use their own snapshot feature and not have just the top level
global_trace have the snapshot feature and latency tracers for itself.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2013-03-05 22:24:35 +08:00
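For reference, the structure described above is small; a rough sketch of its shape follows (the authoritative definition lives in kernel/trace/trace.h, and the exact member types there may differ):

/* Illustrative sketch only - see kernel/trace/trace.h for the real definition. */
struct trace_buffer {
	struct trace_array		*tr;		/* back pointer to the owning trace_array */
	struct ring_buffer		*buffer;	/* the ring buffer itself */
	struct trace_array_cpu __percpu	*data;		/* per_cpu data pointers */
	cycle_t				time_start;	/* when the max latency was hit */
	int				cpu;		/* cpu that hit the max latency */
};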

struct trace_buffer;
struct tracer;
struct dentry;

struct trace_print_flags {
	unsigned long		mask;
	const char		*name;
};

struct trace_print_flags_u64 {
	unsigned long long	mask;
	const char		*name;
};

const char *ftrace_print_flags_seq(struct trace_seq *p, const char *delim,
				   unsigned long flags,
				   const struct trace_print_flags *flag_array);

const char *ftrace_print_symbols_seq(struct trace_seq *p, unsigned long val,
				     const struct trace_print_flags *symbol_array);

#if BITS_PER_LONG == 32
const char *ftrace_print_symbols_seq_u64(struct trace_seq *p,
					 unsigned long long val,
					 const struct trace_print_flags_u64
					 *symbol_array);
#endif

const char *ftrace_print_hex_seq(struct trace_seq *p,
				 const unsigned char *buf, int len);

struct trace_iterator;
struct trace_event;

int ftrace_raw_output_prep(struct trace_iterator *iter,
			   struct trace_event *event);

/*
 * The trace entry - the most basic unit of tracing. This is what
 * is printed in the end as a single line in the trace output, such as:
 *
 *     bash-15816 [01]   235.197585: idle_cpu <- irq_enter
 */
struct trace_entry {
	unsigned short		type;
	unsigned char		flags;
	unsigned char		preempt_count;
	int			pid;
};

#define FTRACE_MAX_EVENT						\
	((1 << (sizeof(((struct trace_entry *)0)->type) * 8)) - 1)

/*
 * Trace iterator - used by printout routines who present trace
 * results to users and which routines might sleep, etc:
 */
struct trace_iterator {
	struct trace_array	*tr;
	struct tracer		*trace;
	struct trace_buffer	*trace_buffer;
	void			*private;
	int			cpu_file;
	struct mutex		mutex;
	struct ring_buffer_iter	**buffer_iter;
	unsigned long		iter_flags;

	/* trace_seq for __print_flags() and __print_symbolic() etc. */
	struct trace_seq	tmp_seq;

	cpumask_var_t		started;

	/* it's true when current open file is snapshot */
	bool			snapshot;

	/* The below is zeroed out in pipe_read */
	struct trace_seq	seq;
	struct trace_entry	*ent;
	unsigned long		lost_events;
	int			leftover;
	int			ent_size;
	int			cpu;
	u64			ts;

	loff_t			pos;
	long			idx;

	/* All new field here will be zeroed out in pipe_read */
};

enum trace_iter_flags {
	TRACE_FILE_LAT_FMT	= 1,
	TRACE_FILE_ANNOTATE	= 2,
	TRACE_FILE_TIME_IN_NS	= 4,
};

typedef enum print_line_t (*trace_print_func)(struct trace_iterator *iter,
					      int flags, struct trace_event *event);

struct trace_event_functions {
	trace_print_func	trace;
	trace_print_func	raw;
	trace_print_func	hex;
	trace_print_func	binary;
};

struct trace_event {
	struct hlist_node		node;
	struct list_head		list;
	int				type;
	struct trace_event_functions	*funcs;
};

extern int register_ftrace_event(struct trace_event *event);
extern int unregister_ftrace_event(struct trace_event *event);
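The trace_event/trace_event_functions pair above is the hook for custom output formats.  A minimal, hypothetical registration might look like the following sketch (the "foo" names are invented; trace_seq_printf() comes from linux/trace_seq.h, which this header already includes):

/* Illustrative sketch only - registering a custom output format. */
static enum print_line_t foo_trace_output(struct trace_iterator *iter,
					  int flags, struct trace_event *event)
{
	trace_seq_printf(&iter->seq, "foo event\n");
	return TRACE_TYPE_HANDLED;
}

static struct trace_event_functions foo_funcs = {
	.trace		= foo_trace_output,
};

static struct trace_event foo_event = {
	.type		= 0,		/* 0 lets the core pick a free type id */
	.funcs		= &foo_funcs,
};

static int __init foo_init(void)
{
	if (!register_ftrace_event(&foo_event))
		return -ENODEV;		/* register_ftrace_event() returns 0 on failure */
	return 0;
}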

/* Return values for print_line callback */
enum print_line_t {
	TRACE_TYPE_PARTIAL_LINE	= 0,	/* Retry after flushing the seq */
	TRACE_TYPE_HANDLED	= 1,
	TRACE_TYPE_UNHANDLED	= 2,	/* Relay to other output functions */
	TRACE_TYPE_NO_CONSUME	= 3	/* Handled but ask to not consume */
};

void tracing_generic_entry_update(struct trace_entry *entry,
				  unsigned long flags,
				  int pc);

struct ftrace_event_file;

struct ring_buffer_event *
trace_event_buffer_lock_reserve(struct ring_buffer **current_buffer,
				struct ftrace_event_file *ftrace_file,
				int type, unsigned long len,
				unsigned long flags, int pc);
struct ring_buffer_event *
trace_current_buffer_lock_reserve(struct ring_buffer **current_buffer,
				  int type, unsigned long len,
				  unsigned long flags, int pc);
void trace_current_buffer_unlock_commit(struct ring_buffer *buffer,
					struct ring_buffer_event *event,
					unsigned long flags, int pc);
void trace_buffer_unlock_commit(struct ring_buffer *buffer,
				struct ring_buffer_event *event,
				unsigned long flags, int pc);
void trace_buffer_unlock_commit_regs(struct ring_buffer *buffer,
				     struct ring_buffer_event *event,
				     unsigned long flags, int pc,
				     struct pt_regs *regs);
void trace_current_buffer_discard_commit(struct ring_buffer *buffer,
					 struct ring_buffer_event *event);

void tracing_record_cmdline(struct task_struct *tsk);

int ftrace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...);

struct event_filter;

enum trace_reg {
	TRACE_REG_REGISTER,
	TRACE_REG_UNREGISTER,
#ifdef CONFIG_PERF_EVENTS
	TRACE_REG_PERF_REGISTER,
	TRACE_REG_PERF_UNREGISTER,
	TRACE_REG_PERF_OPEN,
	TRACE_REG_PERF_CLOSE,
	TRACE_REG_PERF_ADD,
	TRACE_REG_PERF_DEL,
#endif
};

struct ftrace_event_call;

struct ftrace_event_class {
	char			*system;
	void			*probe;
#ifdef CONFIG_PERF_EVENTS
	void			*perf_probe;
#endif
	int			(*reg)(struct ftrace_event_call *event,
				       enum trace_reg type, void *data);
	int			(*define_fields)(struct ftrace_event_call *);
	struct list_head	*(*get_fields)(struct ftrace_event_call *);
	struct list_head	fields;
	int			(*raw_init)(struct ftrace_event_call *);
};

extern int ftrace_event_reg(struct ftrace_event_call *event,
			    enum trace_reg type, void *data);

int ftrace_output_event(struct trace_iterator *iter, struct ftrace_event_call *event,
			char *fmt, ...);

int ftrace_event_define_field(struct ftrace_event_call *call,
			      char *type, int len, char *item, int offset,
			      int field_size, int sign, int filter);

struct ftrace_event_buffer {
	struct ring_buffer		*buffer;
	struct ring_buffer_event	*event;
	struct ftrace_event_file	*ftrace_file;
	void				*entry;
	unsigned long			flags;
	int				pc;
};

void *ftrace_event_buffer_reserve(struct ftrace_event_buffer *fbuffer,
				  struct ftrace_event_file *ftrace_file,
				  unsigned long len);

void ftrace_event_buffer_commit(struct ftrace_event_buffer *fbuffer);
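A minimal usage sketch of the reserve/commit pair declared above, for a hypothetical probe ("sample_entry" is a made-up payload type and error handling beyond the NULL check is elided):

/* Illustrative sketch only - how a probe might use the helpers above. */
static void sample_probe(struct ftrace_event_file *ftrace_file, int arg)
{
	struct ftrace_event_buffer fbuffer;
	struct sample_entry *entry;		/* hypothetical payload type */

	entry = ftrace_event_buffer_reserve(&fbuffer, ftrace_file,
					    sizeof(*entry));
	if (!entry)
		return;
	entry->arg = arg;			/* fill in the record */
	ftrace_event_buffer_commit(&fbuffer);	/* commit the reserved record */
}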

enum {
	TRACE_EVENT_FL_FILTERED_BIT,
	TRACE_EVENT_FL_CAP_ANY_BIT,
	TRACE_EVENT_FL_NO_SET_FILTER_BIT,
	TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
	TRACE_EVENT_FL_WAS_ENABLED_BIT,
	TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
	TRACE_EVENT_FL_TRACEPOINT_BIT,
};

/*
 * Event flags:
 *  FILTERED	    - The event has a filter attached
 *  CAP_ANY	    - Any user can enable for perf
 *  NO_SET_FILTER   - Set when filter has error and is to be ignored
 *  IGNORE_ENABLE   - For ftrace internal events, do not enable with debugfs file
 *  WAS_ENABLED     - Set and stays set when an event was ever enabled
 *                     (used for module unloading, if a module event is enabled,
 *                      it is best to clear the buffers that used it).
 *  USE_CALL_FILTER - For ftrace internal events, don't use file filter
 *  TRACEPOINT      - Event is a tracepoint
 */
enum {
	TRACE_EVENT_FL_FILTERED		= (1 << TRACE_EVENT_FL_FILTERED_BIT),
	TRACE_EVENT_FL_CAP_ANY		= (1 << TRACE_EVENT_FL_CAP_ANY_BIT),
	TRACE_EVENT_FL_NO_SET_FILTER	= (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT),
	TRACE_EVENT_FL_IGNORE_ENABLE	= (1 << TRACE_EVENT_FL_IGNORE_ENABLE_BIT),
	TRACE_EVENT_FL_WAS_ENABLED	= (1 << TRACE_EVENT_FL_WAS_ENABLED_BIT),
	TRACE_EVENT_FL_USE_CALL_FILTER	= (1 << TRACE_EVENT_FL_USE_CALL_FILTER_BIT),
	TRACE_EVENT_FL_TRACEPOINT	= (1 << TRACE_EVENT_FL_TRACEPOINT_BIT),
};

struct ftrace_event_call {
	struct list_head	list;
	struct ftrace_event_class *class;
	union {
		char			*name;
		/* Set TRACE_EVENT_FL_TRACEPOINT flag when using "tp" */
		struct tracepoint	*tp;
	};
	struct trace_event	event;
	const char		*print_fmt;
	struct event_filter	*filter;
	struct list_head	*files;
	void			*mod;
	void			*data;
	/*
	 *   bit 0:	filter_active
	 *   bit 1:	allow trace by non root (cap any)
	 *   bit 2:	failed to apply filter
	 *   bit 3:	ftrace internal event (do not enable)
	 *   bit 4:	Event was enabled by module
	 *   bit 5:	use call filter rather than file filter
	 *   bit 6:	Event is a tracepoint
	 */
	int			flags; /* static flags of different events */

#ifdef CONFIG_PERF_EVENTS
	int				perf_refcount;
	struct hlist_head __percpu	*perf_events;

	int	(*perf_perm)(struct ftrace_event_call *,
			     struct perf_event *);
#endif
};

static inline const char *
ftrace_event_name(struct ftrace_event_call *call)
{
	if (call->flags & TRACE_EVENT_FL_TRACEPOINT)
		return call->tp ? call->tp->name : NULL;
	else
		return call->name;
}

struct trace_array;
struct ftrace_subsystem_dir;

enum {
	FTRACE_EVENT_FL_ENABLED_BIT,
	FTRACE_EVENT_FL_RECORDED_CMD_BIT,
	FTRACE_EVENT_FL_FILTERED_BIT,
	FTRACE_EVENT_FL_NO_SET_FILTER_BIT,
	FTRACE_EVENT_FL_SOFT_MODE_BIT,
	FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
tracing: Add basic event trigger framework

Add a 'trigger' file for each trace event, enabling 'trace event
triggers' to be set for trace events.

'trace event triggers' are patterned after the existing 'ftrace
function triggers' implementation except that triggers are written to
per-event 'trigger' files instead of to a single file such as the
'set_ftrace_filter' used for ftrace function triggers.

The implementation is meant to be entirely separate from ftrace
function triggers, in order to keep the respective implementations
relatively simple and to allow them to diverge.

The event trigger functionality is built on top of SOFT_DISABLE
functionality.  It adds a TRIGGER_MODE bit to the ftrace_event_file
flags which is checked when any trace event fires.  Triggers set for a
particular event need to be checked regardless of whether that event
is actually enabled or not - getting an event to fire even if it's not
enabled is what's already implemented by SOFT_DISABLE mode, so trigger
mode directly reuses that.  Event triggers essentially inherit the
soft disable logic in __ftrace_event_enable_disable() while adding a
bit of logic and trigger reference counting via tm_ref on top of that
in a new trace_event_trigger_enable_disable() function.  Because the
base __ftrace_event_enable_disable() code now needs to be invoked from
outside trace_events.c, a wrapper is also added for those usages.

The triggers for an event are actually invoked via a new function,
event_triggers_call(), and code is also added to invoke them for
ftrace_raw_event calls as well as syscall events.

The main part of the patch creates a new trace_events_trigger.c file
to contain the trace event triggers implementation.

The standard open, read, write, and release file operations are
implemented here.

The open() implementation sets up for the various open modes of the
'trigger' file.  It creates and attaches the trigger iterator and sets
up the command parser.  If opened for reading, it also sets up the
trigger seq_ops.

The write() implementation parses the event trigger written to the
'trigger' file, looks up the trigger command, and passes it along to
that event_command's func() implementation for command-specific
processing.

The release() implementation does whatever cleanup is needed to
release the 'trigger' file, like releasing the parser and trigger
iterator, etc.

A couple of functions for event command registration and
unregistration are added, along with a list to add them to and a
mutex to protect them, as well as an (initially empty) registration
function to add the set of commands that will be added by future
commits, and a call to it from the trace event initialization code.

Also added are a couple of trigger-specific data structures needed
for these implementations, such as a trigger iterator and a struct
for trigger-specific data.

A couple of structs consisting mostly of functions meant to be
implemented in command-specific ways, event_command and
event_trigger_ops, are used by the generic event trigger command
implementations.  They're being put into trace.h alongside the other
trace_event data structures and functions, in the expectation that
they'll be needed in several trace_event-related files such as
trace_events_trigger.c and trace_events.c.

The event_command.func() function is meant to be called by the
trigger parsing code in order to add a trigger instance to the
corresponding event.  It essentially coordinates adding a live
trigger instance to the event, and arming the trigger on the event.

Every event_command func() implementation essentially does the same
thing for any command:

 - choose ops - use the value of param to choose either a number or
   count version of event_trigger_ops specific to the command
 - do the register or unregister of those ops
 - associate a filter, if specified, with the triggering event

The reg() and unreg() ops allow command-specific implementations for
event_trigger_op registration and unregistration, and the
get_trigger_ops() op allows command-specific event_trigger_ops
selection to be parameterized.  When a trigger instance is added, the
reg() op essentially adds that trigger to the triggering event and
arms it, while unreg() does the opposite.  The set_filter() function
is used to associate a filter with the trigger - if the command
doesn't specify a set_filter() implementation, the command will
ignore filters.

Each command has an associated trigger_type, which serves double
duty, both as a unique identifier for the command as well as a value
that can be used for setting a trigger mode bit during trigger
invocation.

The signature of func() adds a pointer to the event_command struct,
used to invoke those functions, along with a command_data param that
can be passed to the reg/unreg functions.  This allows func()
implementations to use command-specific blobs and supports code
re-use.

The event_trigger_ops.func() op corresponds to the trigger 'probe'
function that gets called when the triggering event is actually
invoked.  The other functions are used to list the trigger when
needed, along with a couple of mundane bookkeeping functions.

This also moves event_file_data() into trace.h so it can be used
outside of trace_events.c.
Link: http://lkml.kernel.org/r/316d95061accdee070aac8e5750afba0192fa5b9.1382622043.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Idea-by: Steve Rostedt <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2013-10-24 21:59:24 +08:00
	FTRACE_EVENT_FL_TRIGGER_MODE_BIT,
	FTRACE_EVENT_FL_TRIGGER_COND_BIT,
};

/*
 * Ftrace event file flags:
 *  ENABLED	  - The event is enabled
 *  RECORDED_CMD  - The comms should be recorded at sched_switch
 *  FILTERED	  - The event has a filter attached
 *  NO_SET_FILTER - Set when filter has error and is to be ignored
 *  SOFT_MODE	  - The event is enabled/disabled by SOFT_DISABLED
 *  SOFT_DISABLED - When set, do not trace the event (even though its
 *                   tracepoint may be enabled)
 *  TRIGGER_MODE  - When set, invoke the triggers associated with the event
 *  TRIGGER_COND  - When set, one or more triggers has an associated filter
 */
enum {
	FTRACE_EVENT_FL_ENABLED		= (1 << FTRACE_EVENT_FL_ENABLED_BIT),
	FTRACE_EVENT_FL_RECORDED_CMD	= (1 << FTRACE_EVENT_FL_RECORDED_CMD_BIT),
	FTRACE_EVENT_FL_FILTERED	= (1 << FTRACE_EVENT_FL_FILTERED_BIT),
	FTRACE_EVENT_FL_NO_SET_FILTER	= (1 << FTRACE_EVENT_FL_NO_SET_FILTER_BIT),
	FTRACE_EVENT_FL_SOFT_MODE	= (1 << FTRACE_EVENT_FL_SOFT_MODE_BIT),
	FTRACE_EVENT_FL_SOFT_DISABLED	= (1 << FTRACE_EVENT_FL_SOFT_DISABLED_BIT),
	FTRACE_EVENT_FL_TRIGGER_MODE	= (1 << FTRACE_EVENT_FL_TRIGGER_MODE_BIT),
	FTRACE_EVENT_FL_TRIGGER_COND	= (1 << FTRACE_EVENT_FL_TRIGGER_COND_BIT),
};
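The TRIGGER_MODE and TRIGGER_COND flags above are driven by the command framework described in the 'Add basic event trigger framework' message earlier.  A condensed sketch of the two ops structures that message introduces follows; the member signatures are paraphrased from the commit text and may not match the authoritative definitions in kernel/trace/trace.h exactly.

/* Illustrative sketch only - condensed from the commit description. */
struct event_trigger_ops {
	void	(*func)(struct event_trigger_data *data);	/* the trigger "probe" */
	int	(*init)(struct event_trigger_ops *ops,
			struct event_trigger_data *data);
	void	(*free)(struct event_trigger_ops *ops,
			struct event_trigger_data *data);
	int	(*print)(struct seq_file *m,			/* list the trigger */
			 struct event_trigger_ops *ops,
			 struct event_trigger_data *data);
};

struct event_command {
	struct list_head	list;
	char			*name;
	enum event_trigger_type	trigger_type;	/* unique id, doubles as a mode bit */
	bool			post_trigger;	/* defer until the record is closed */
	/* parse a trigger written to the 'trigger' file and attach it */
	int	(*func)(struct event_command *cmd_ops,
			struct ftrace_event_file *file,
			char *glob, char *cmd, char *params);
	/* add the trigger to the event and arm it / tear it down again */
	int	(*reg)(char *glob, struct event_trigger_ops *ops,
		       struct event_trigger_data *data,
		       struct ftrace_event_file *file);
	void	(*unreg)(char *glob, struct event_trigger_ops *ops,
			 struct event_trigger_data *data,
			 struct ftrace_event_file *file);
	/* associate an 'if <filter>' expression with the trigger */
	int	(*set_filter)(char *filter_str,
			      struct event_trigger_data *data,
			      struct ftrace_event_file *file);
	struct event_trigger_ops *(*get_trigger_ops)(char *cmd, char *param);
};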

struct ftrace_event_file {
	struct list_head		list;
	struct ftrace_event_call	*event_call;
	struct event_filter		*filter;
	struct dentry			*dir;
	struct trace_array		*tr;
	struct ftrace_subsystem_dir	*system;
	struct list_head		triggers;

	/*
	 * 32 bit flags:
	 *   bit 0:		enabled
	 *   bit 1:		enabled cmd record
	 *   bit 2:		enable/disable with the soft disable bit
	 *   bit 3:		soft disabled
tracing: Add basic event trigger framework
Add a 'trigger' file for each trace event, enabling 'trace event
triggers' to be set for trace events.
'trace event triggers' are patterned after the existing 'ftrace
function triggers' implementation except that triggers are written to
per-event 'trigger' files instead of to a single file such as the
'set_ftrace_filter' used for ftrace function triggers.
The implementation is meant to be entirely separate from ftrace
function triggers, in order to keep the respective implementations
relatively simple and to allow them to diverge.
The event trigger functionality is built on top of SOFT_DISABLE
functionality. It adds a TRIGGER_MODE bit to the ftrace_event_file
flags which is checked when any trace event fires. Triggers set for a
particular event need to be checked regardless of whether that event
is actually enabled or not - getting an event to fire even if it's not
enabled is what's already implemented by SOFT_DISABLE mode, so trigger
mode directly reuses that. Event trigger essentially inherit the soft
disable logic in __ftrace_event_enable_disable() while adding a bit of
logic and trigger reference counting via tm_ref on top of that in a
new trace_event_trigger_enable_disable() function. Because the base
__ftrace_event_enable_disable() code now needs to be invoked from
outside trace_events.c, a wrapper is also added for those usages.
The triggers for an event are actually invoked via a new function,
event_triggers_call(), and code is also added to invoke them for
ftrace_raw_event calls as well as syscall events.
The main part of the patch creates a new trace_events_trigger.c file
to contain the trace event triggers implementation.
The standard open, read, and release file operations are implemented
here.
The open() implementation sets up for the various open modes of the
'trigger' file. It creates and attaches the trigger iterator and sets
up the command parser. If opened for reading set up the trigger
seq_ops.
The read() implementation parses the event trigger written to the
'trigger' file, looks up the trigger command, and passes it along to
that event_command's func() implementation for command-specific
processing.
The release() implementation does whatever cleanup is needed to
release the 'trigger' file, like releasing the parser and trigger
iterator, etc.
A couple of functions for event command registration and
unregistration are added, along with a list to add them to and a mutex
to protect them, as well as an (initially empty) registration function
to add the set of commands that will be added by future commits, and
call to it from the trace event initialization code.
also added are a couple trigger-specific data structures needed for
these implementations such as a trigger iterator and a struct for
trigger-specific data.
A couple structs consisting mostly of function meant to be implemented
in command-specific ways, event_command and event_trigger_ops, are
used by the generic event trigger command implementations. They're
being put into trace.h alongside the other trace_event data structures
and functions, in the expectation that they'll be needed in several
trace_event-related files such as trace_events_trigger.c and
trace_events.c.
The event_command.func() function is meant to be called by the trigger
parsing code in order to add a trigger instance to the corresponding
event. It essentially coordinates adding a live trigger instance to
the event, and arming the triggering event.
Every event_command func() implementation essentially does the
same thing for any command:
- choose ops - use the value of param to choose either a normal or a
count version of event_trigger_ops specific to the command
- do the register or unregister of those ops
- associate a filter, if specified, with the triggering event
The reg() and unreg() ops allow command-specific implementations for
event_trigger_op registration and unregistration, and the
get_trigger_ops() op allows command-specific event_trigger_ops
selection to be parameterized. When a trigger instance is added, the
reg() op essentially adds that trigger to the triggering event and
arms it, while unreg() does the opposite. The set_filter() function
is used to associate a filter with the trigger - if the command
doesn't specify a set_filter() implementation, the command will ignore
filters.
Each command has an associated trigger_type, which serves double duty,
both as a unique identifier for the command as well as a value that
can be used for setting a trigger mode bit during trigger invocation.
The signature of func() adds a pointer to the event_command struct,
used to invoke those functions, along with a command_data param that
can be passed to the reg/unreg functions. This allows func()
implementations to use command-specific blobs and supports code
re-use.
The event_trigger_ops.func() callback corresponds to the trigger 'probe'
function that gets called when the triggering event is actually
invoked. The other functions are used to list the trigger when
needed, along with a couple of mundane bookkeeping functions.
This also moves event_file_data() into trace.h so it can be used
outside of trace_events.c.
Link: http://lkml.kernel.org/r/316d95061accdee070aac8e5750afba0192fa5b9.1382622043.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Idea-by: Steve Rostedt <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2013-10-24 21:59:24 +08:00
|
|
|
* bit 4: trigger enabled
|
2010-04-23 23:12:36 +08:00
|
|
|
*
|
2013-03-13 01:26:18 +08:00
|
|
|
* Note: The bits must be set atomically to prevent races
|
|
|
|
* from other writers. Reads of flags do not need to be in
|
|
|
|
* sync as they occur in critical sections. But the way flags
|
2012-05-04 11:09:03 +08:00
|
|
|
* is currently used, these changes do not affect the code
|
2010-05-14 22:19:13 +08:00
|
|
|
* except that when a change is made, it may have a slight
|
|
|
|
* delay in propagating the changes to other CPUs due to
|
2013-03-13 01:26:18 +08:00
|
|
|
* caching and such. Which is mostly OK ;-)
|
2010-04-23 23:12:36 +08:00
|
|
|
*/
|
2013-03-13 01:26:18 +08:00
|
|
|
unsigned long flags;
|
2013-05-09 13:44:29 +08:00
|
|
|
atomic_t sm_ref; /* soft-mode reference counter */
|
2013-10-24 21:59:24 +08:00
|
|
|
atomic_t tm_ref; /* trigger-mode reference counter */
|
2009-04-13 23:20:49 +08:00
|
|
|
};
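Because the bits above must be set atomically, callers flip them with the
standard bitops rather than open-coded read-modify-write on the whole word.
A minimal sketch, assuming the FTRACE_EVENT_FL_*_BIT values from the flag
enum earlier in this header and the set_bit()/test_bit() helpers from
linux/bitops.h (the function names here are hypothetical):

static inline void example_soft_disable(struct ftrace_event_file *file)
{
	/* Atomically mark the event soft disabled. */
	set_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags);
}

static inline bool example_is_soft_disabled(struct ftrace_event_file *file)
{
	/* Lockless read; see the comment above about propagation delay. */
	return test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags);
}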
|
|
|
|
|
2010-11-18 09:11:42 +08:00
|
|
|
#define __TRACE_EVENT_FLAGS(name, value) \
|
|
|
|
static int __init trace_init_flags_##name(void) \
|
|
|
|
{ \
|
2014-04-09 05:26:21 +08:00
|
|
|
event_##name.flags |= value; \
|
2010-11-18 09:11:42 +08:00
|
|
|
return 0; \
|
|
|
|
} \
|
|
|
|
early_initcall(trace_init_flags_##name);
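A minimal usage sketch, assuming an event 'my_event' already defined
elsewhere with TRACE_EVENT() and the TRACE_EVENT_FL_CAP_ANY flag from the
flag enum earlier in this header (both names are assumptions here, not
taken from this file's visible text):

/* Sketch: let perf attach to the hypothetical event without CAP_SYS_ADMIN. */
__TRACE_EVENT_FLAGS(my_event, TRACE_EVENT_FL_CAP_ANY);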
|
|
|
|
|
2013-11-14 23:23:04 +08:00
|
|
|
#define __TRACE_EVENT_PERF_PERM(name, expr...) \
|
|
|
|
static int perf_perm_##name(struct ftrace_event_call *tp_event, \
|
|
|
|
struct perf_event *p_event) \
|
|
|
|
{ \
|
|
|
|
return ({ expr; }); \
|
|
|
|
} \
|
|
|
|
static int __init trace_init_perf_perm_##name(void) \
|
|
|
|
{ \
|
|
|
|
event_##name.perf_perm = &perf_perm_##name; \
|
|
|
|
return 0; \
|
|
|
|
} \
|
|
|
|
early_initcall(trace_init_perf_perm_##name);
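A minimal sketch of attaching a perf permission check (the event name is
hypothetical; is_sampling_event() is assumed to be the usual helper from
linux/perf_event.h, and the policy shown is only an illustration):

/* Sketch: refuse to let perf use the hypothetical event for sampling. */
__TRACE_EVENT_PERF_PERM(my_event,
	is_sampling_event(p_event) ? -EPERM : 0);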
|
|
|
|
|
2010-03-05 12:35:37 +08:00
|
|
|
#define PERF_MAX_TRACE_SIZE 2048
|
2009-09-18 12:10:28 +08:00
|
|
|
|
2009-09-13 07:04:54 +08:00
|
|
|
#define MAX_FILTER_STR_VAL 256 /* Should handle KSYM_SYMBOL_LEN */
|
2009-04-13 23:20:49 +08:00
|
|
|
|
2013-10-24 21:59:24 +08:00
|
|
|
enum event_trigger_type {
|
|
|
|
ETT_NONE = (0),
|
tracing: Add 'traceon' and 'traceoff' event trigger commands
Add 'traceon' and 'traceoff' event_command commands. traceon and
traceoff event triggers are added by the user via these commands in a
similar way and using practically the same syntax as the analogous
'traceon' and 'traceoff' ftrace function commands, but instead of
writing to the set_ftrace_filter file, the traceon and traceoff
triggers are written to the per-event 'trigger' files:
echo 'traceon' > .../tracing/events/somesys/someevent/trigger
echo 'traceoff' > .../tracing/events/somesys/someevent/trigger
The above command will turn tracing on or off whenever someevent is
hit.
This also adds a 'count' version that limits the number of times the
command will be invoked:
echo 'traceon:N' > .../tracing/events/somesys/someevent/trigger
echo 'traceoff:N' > .../tracing/events/somesys/someevent/trigger
Where N is the number of times the command will be invoked.
The above commands will turn tracing on or off whenever someevent
is hit, but only N times.
Some common register/unregister_trigger() implementations of the
event_command reg()/unreg() callbacks are also provided, which add and
remove trigger instances to the per-event list of triggers, and
arm/disarm them as appropriate. event_trigger_callback() is a
general-purpose event_command func() implementation that orchestrates
command parsing and registration for most normal commands.
Most event commands will use these, but some will override and
possibly reuse them.
The event_trigger_init(), event_trigger_free(), and
event_trigger_print() functions are meant to be common implementations
of the event_trigger_ops init(), free(), and print() ops,
respectively.
Most trigger_ops implementations will use these, but some will
override and possibly reuse them.
Link: http://lkml.kernel.org/r/00a52816703b98d2072947478dd6e2d70cde5197.1382622043.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2013-10-24 21:59:25 +08:00
|
|
|
ETT_TRACE_ONOFF = (1 << 0),
|
2013-10-24 21:59:26 +08:00
|
|
|
ETT_SNAPSHOT = (1 << 1),
|
2013-10-24 21:59:27 +08:00
|
|
|
ETT_STACKTRACE = (1 << 2),
|
2013-10-24 21:59:28 +08:00
|
|
|
ETT_EVENT_ENABLE = (1 << 3),
|
2013-10-24 21:59:24 +08:00
|
|
|
};
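Each ETT_* value occupies its own bit, so one enum event_trigger_type value
can carry several trigger types at once - this is how
__event_trigger_test_discard() below collects deferred triggers into *tt
before handing them to event_triggers_post_call(). A minimal illustration
(the helper name is hypothetical):

static inline bool example_wants_stacktrace(enum event_trigger_type tt)
{
	/* tt may have several ETT_* bits set; test just one of them. */
	return tt & ETT_STACKTRACE;
}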
|
|
|
|
|
2013-10-24 21:34:17 +08:00
|
|
|
extern void destroy_preds(struct ftrace_event_file *file);
|
|
|
|
extern void destroy_call_preds(struct ftrace_event_call *call);
|
2009-10-15 11:21:42 +08:00
|
|
|
extern int filter_match_preds(struct event_filter *filter, void *rec);
|
2013-10-24 21:34:17 +08:00
|
|
|
|
|
|
|
extern int filter_check_discard(struct ftrace_event_file *file, void *rec,
|
|
|
|
struct ring_buffer *buffer,
|
|
|
|
struct ring_buffer_event *event);
|
|
|
|
extern int call_filter_check_discard(struct ftrace_event_call *call, void *rec,
|
|
|
|
struct ring_buffer *buffer,
|
|
|
|
struct ring_buffer_event *event);
|
2013-10-24 21:59:29 +08:00
|
|
|
extern enum event_trigger_type event_triggers_call(struct ftrace_event_file *file,
|
|
|
|
void *rec);
|
|
|
|
extern void event_triggers_post_call(struct ftrace_event_file *file,
|
|
|
|
enum event_trigger_type tt);
|
2009-04-13 23:20:49 +08:00
|
|
|
|
2014-01-07 10:32:10 +08:00
|
|
|
/**
|
|
|
|
* ftrace_trigger_soft_disabled - do triggers and test if soft disabled
|
|
|
|
* @file: The file pointer of the event to test
|
|
|
|
*
|
|
|
|
* If any triggers without filters are attached to this event, they
|
|
|
|
* will be called here. If the event is soft disabled and has no
|
|
|
|
* triggers that require testing the fields, it will return true,
|
|
|
|
* otherwise false.
|
|
|
|
*/
|
|
|
|
static inline bool
|
|
|
|
ftrace_trigger_soft_disabled(struct ftrace_event_file *file)
|
|
|
|
{
|
|
|
|
unsigned long eflags = file->flags;
|
|
|
|
|
|
|
|
if (!(eflags & FTRACE_EVENT_FL_TRIGGER_COND)) {
|
|
|
|
if (eflags & FTRACE_EVENT_FL_TRIGGER_MODE)
|
|
|
|
event_triggers_call(file, NULL);
|
|
|
|
if (eflags & FTRACE_EVENT_FL_SOFT_DISABLED)
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
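A minimal sketch of the early-exit pattern this helper gives a raw event
probe (the probe and its wiring are hypothetical): triggers that need no
field data fire right away, and a purely soft-disabled event records
nothing.

static void example_raw_event_probe(void *data)
{
	struct ftrace_event_file *ftrace_file = data;

	if (ftrace_trigger_soft_disabled(ftrace_file))
		return;

	/* ... reserve a ring buffer event, assign its fields, commit ... */
}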
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Helper function for event_trigger_unlock_commit{_regs}().
|
|
|
|
* If there are event triggers attached to this event that require
|
|
|
|
* filtering against its fields, then they will be called as the
|
|
|
|
* entry already holds the field information of the current event.
|
|
|
|
*
|
|
|
|
* It also checks if the event should be discarded or not.
|
|
|
|
* It is to be discarded if the event is soft disabled and the
|
|
|
|
* event was only recorded to process triggers, or if the event
|
|
|
|
* filter is active and this event did not match the filters.
|
|
|
|
*
|
|
|
|
* Returns true if the event is discarded, false otherwise.
|
|
|
|
*/
|
|
|
|
static inline bool
|
|
|
|
__event_trigger_test_discard(struct ftrace_event_file *file,
|
|
|
|
struct ring_buffer *buffer,
|
|
|
|
struct ring_buffer_event *event,
|
|
|
|
void *entry,
|
|
|
|
enum event_trigger_type *tt)
|
|
|
|
{
|
|
|
|
unsigned long eflags = file->flags;
|
|
|
|
|
|
|
|
if (eflags & FTRACE_EVENT_FL_TRIGGER_COND)
|
|
|
|
*tt = event_triggers_call(file, entry);
|
|
|
|
|
|
|
|
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
|
|
|
|
ring_buffer_discard_commit(buffer, event);
|
|
|
|
else if (!filter_check_discard(file, entry, buffer, event))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* event_trigger_unlock_commit - handle triggers and finish event commit
|
|
|
|
* @file: The file pointer associated with the event
|
|
|
|
* @buffer: The ring buffer that the event is being written to
|
|
|
|
* @event: The event meta data in the ring buffer
|
|
|
|
* @entry: The event itself
|
|
|
|
* @irq_flags: The state of the interrupts at the start of the event
|
|
|
|
* @pc: The state of the preempt count at the start of the event.
|
|
|
|
*
|
|
|
|
* This is a helper function to handle triggers that require data
|
|
|
|
* from the event itself. It also tests the event against filters and
|
|
|
|
* checks whether the event is soft disabled and should be discarded.
|
|
|
|
*/
|
|
|
|
static inline void
|
|
|
|
event_trigger_unlock_commit(struct ftrace_event_file *file,
|
|
|
|
struct ring_buffer *buffer,
|
|
|
|
struct ring_buffer_event *event,
|
|
|
|
void *entry, unsigned long irq_flags, int pc)
|
|
|
|
{
|
|
|
|
enum event_trigger_type tt = ETT_NONE;
|
|
|
|
|
|
|
|
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
|
|
|
trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
|
|
|
|
|
|
|
|
if (tt)
|
|
|
|
event_triggers_post_call(file, tt);
|
|
|
|
}
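A minimal sketch of the tail of such a probe (reservation and field
assignment are elided; all names are hypothetical): one call covers
filter-dependent triggers, the commit-or-discard decision, and any deferred
triggers.

static void example_commit_path(struct ftrace_event_file *ftrace_file,
				struct ring_buffer *buffer,
				struct ring_buffer_event *event,
				void *entry,
				unsigned long irq_flags, int pc)
{
	/* The event's fields would have been assigned into 'entry' here. */
	event_trigger_unlock_commit(ftrace_file, buffer, event, entry,
				    irq_flags, pc);
}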
|
|
|
|
|
|
|
|
/**
|
|
|
|
* event_trigger_unlock_commit_regs - handle triggers and finish event commit
|
|
|
|
* @file: The file pointer associated with the event
|
|
|
|
* @buffer: The ring buffer that the event is being written to
|
|
|
|
* @event: The event meta data in the ring buffer
|
|
|
|
* @entry: The event itself
|
|
|
|
* @irq_flags: The state of the interrupts at the start of the event
|
|
|
|
* @pc: The state of the preempt count at the start of the event.
|
|
|
|
*
|
|
|
|
* This is a helper function to handle triggers that require data
|
|
|
|
* from the event itself. It also tests the event against filters and
|
|
|
|
* checks whether the event is soft disabled and should be discarded.
|
|
|
|
*
|
|
|
|
* Same as event_trigger_unlock_commit() but calls
|
|
|
|
* trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
|
|
|
|
*/
|
|
|
|
static inline void
|
|
|
|
event_trigger_unlock_commit_regs(struct ftrace_event_file *file,
|
|
|
|
struct ring_buffer *buffer,
|
|
|
|
struct ring_buffer_event *event,
|
|
|
|
void *entry, unsigned long irq_flags, int pc,
|
|
|
|
struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
enum event_trigger_type tt = ETT_NONE;
|
|
|
|
|
|
|
|
if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
|
|
|
|
trace_buffer_unlock_commit_regs(buffer, event,
|
|
|
|
irq_flags, pc, regs);
|
|
|
|
|
|
|
|
if (tt)
|
|
|
|
event_triggers_post_call(file, tt);
|
|
|
|
}
|
|
|
|
|
2009-08-07 10:33:22 +08:00
|
|
|
enum {
|
|
|
|
FILTER_OTHER = 0,
|
|
|
|
FILTER_STATIC_STRING,
|
|
|
|
FILTER_DYN_STRING,
|
2009-08-07 10:33:43 +08:00
|
|
|
FILTER_PTR_STRING,
|
2012-02-15 22:51:53 +08:00
|
|
|
FILTER_TRACE_FN,
|
2009-08-07 10:33:22 +08:00
|
|
|
};
|
|
|
|
|
2009-12-08 11:14:20 +08:00
|
|
|
extern int trace_event_raw_init(struct ftrace_event_call *call);
|
2009-08-27 11:09:51 +08:00
|
|
|
extern int trace_define_field(struct ftrace_event_call *call, const char *type,
|
|
|
|
const char *name, int offset, int size,
|
|
|
|
int is_signed, int filter_type);
|
2009-08-14 04:34:53 +08:00
|
|
|
extern int trace_add_event_call(struct ftrace_event_call *call);
|
2013-07-30 01:50:33 +08:00
|
|
|
extern int trace_remove_event_call(struct ftrace_event_call *call);
|
2009-04-13 23:20:49 +08:00
|
|
|
|
2013-04-20 05:10:27 +08:00
|
|
|
#define is_signed_type(type) (((type)(-1)) < (type)1)
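A minimal sketch combining trace_define_field(), the FILTER_* types above
and is_signed_type() (the record layout and field are hypothetical):

struct example_record {
	struct trace_entry	ent;	/* common header */
	pid_t			pid;
};

static int example_define_fields(struct ftrace_event_call *call)
{
	/* Describe the 'pid' field so the filter code can match on it. */
	return trace_define_field(call, "pid_t", "pid",
				  offsetof(struct example_record, pid),
				  sizeof(pid_t),
				  is_signed_type(pid_t), FILTER_OTHER);
}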
|
2009-04-13 23:20:49 +08:00
|
|
|
|
2009-05-09 04:27:41 +08:00
|
|
|
int trace_set_clr_event(const char *system, const char *event, int set);
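A minimal sketch of flipping an event from kernel code with this helper
(sched:sched_switch is just an example pair of names):

static int __init example_toggle_event(void)
{
	int ret;

	/* Enable, then disable, the sched:sched_switch trace event. */
	ret = trace_set_clr_event("sched", "sched_switch", 1);
	if (ret)
		return ret;

	return trace_set_clr_event("sched", "sched_switch", 0);
}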
|
|
|
|
|
2009-04-13 23:20:49 +08:00
|
|
|
/*
|
|
|
|
* The double __builtin_constant_p is because gcc will give us an error
|
|
|
|
* if we try to allocate the static variable to fmt if it is not a
|
|
|
|
* constant. Even with the outer if statement optimizing out.
|
|
|
|
*/
|
|
|
|
#define event_trace_printk(ip, fmt, args...) \
|
|
|
|
do { \
|
|
|
|
__trace_printk_check_format(fmt, ##args); \
|
|
|
|
tracing_record_cmdline(current); \
|
|
|
|
if (__builtin_constant_p(fmt)) { \
|
|
|
|
static const char *trace_printk_fmt \
|
|
|
|
__attribute__((section("__trace_printk_fmt"))) = \
|
|
|
|
__builtin_constant_p(fmt) ? fmt : NULL; \
|
|
|
|
\
|
|
|
|
__trace_bprintk(ip, trace_printk_fmt, ##args); \
|
|
|
|
} else \
|
|
|
|
__trace_printk(ip, fmt, ##args); \
|
|
|
|
} while (0)
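A minimal usage sketch (assuming _THIS_IP_ from linux/kernel.h for the
caller's instruction pointer; the format string is arbitrary):

static inline void example_event_printk(int value)
{
	/* Constant format, so this compiles down to __trace_bprintk(). */
	event_trace_printk(_THIS_IP_, "example value: %d\n", value);
}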
|
|
|
|
|
2013-07-13 05:07:27 +08:00
|
|
|
/**
|
|
|
|
* tracepoint_string - register constant persistent string to trace system
|
|
|
|
* @str - a constant persistent string that will be referenced in tracepoints
|
|
|
|
*
|
|
|
|
* If constant strings are being used in tracepoints, it is faster and
|
|
|
|
* more efficient to just save the pointer to the string and reference
|
|
|
|
* that with a printf "%s" instead of saving the string in the ring buffer
|
|
|
|
* and wasting space and time.
|
|
|
|
*
|
|
|
|
* The problem with the above approach is that userspace tools that read
|
|
|
|
* the binary output of the trace buffers do not have access to the string.
|
|
|
|
* Instead they just show the address of the string which is not very
|
|
|
|
* useful to users.
|
|
|
|
*
|
|
|
|
* With tracepoint_string(), the string will be registered to the tracing
|
|
|
|
* system and exported to userspace via the debugfs/tracing/printk_formats
|
|
|
|
* file that maps the string address to the string text. This way userspace
|
|
|
|
* tools that read the binary buffers have a way to map the pointers to
|
|
|
|
* the ASCII strings they represent.
|
|
|
|
*
|
|
|
|
* The @str used must be a constant string and persistent as it would not
|
|
|
|
* make sense to show a string that no longer exists. But it is still fine
|
|
|
|
* to be used with modules, because when modules are unloaded, if they
|
|
|
|
* had tracepoints, the ring buffers are cleared too. As long as the string
|
|
|
|
* does not change during the life of the module, it is fine to use
|
|
|
|
* tracepoint_string() within a module.
|
|
|
|
*/
|
|
|
|
#define tracepoint_string(str) \
|
|
|
|
({ \
|
|
|
|
static const char *___tp_str __tracepoint_string = str; \
|
|
|
|
___tp_str; \
|
|
|
|
})
|
|
|
|
#define __tracepoint_string __attribute__((section("__tracepoint_str")))
|
|
|
|
|
2009-12-21 14:27:35 +08:00
|
|
|
#ifdef CONFIG_PERF_EVENTS
|
2009-10-15 11:21:42 +08:00
|
|
|
struct perf_event;
|
2010-03-03 14:16:16 +08:00
|
|
|
|
|
|
|
DECLARE_PER_CPU(struct pt_regs, perf_trace_regs);
|
|
|
|
|
2010-05-19 20:02:22 +08:00
|
|
|
extern int perf_trace_init(struct perf_event *event);
|
|
|
|
extern void perf_trace_destroy(struct perf_event *event);
|
perf: Rework the PMU methods
Replace pmu::{enable,disable,start,stop,unthrottle} with
pmu::{add,del,start,stop}, all of which take a flags argument.
The new interface extends the capability to stop a counter while
keeping it scheduled on the PMU. We replace the throttled state with
the generic stopped state.
This also allows us to efficiently stop/start counters over certain
code paths (like IRQ handlers).
It also allows scheduling a counter without it starting, allowing for
a generic frozen state (useful for rotating stopped counters).
The stopped state is implemented in two different ways, depending on
how the architecture implemented the throttled state:
1) We disable the counter:
a) the pmu has per-counter enable bits, we flip that
b) we program a NOP event, preserving the counter state
2) We store the counter state and ignore all read/overflow events
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-06-16 20:37:10 +08:00
|
|
|
extern int perf_trace_add(struct perf_event *event, int flags);
|
|
|
|
extern void perf_trace_del(struct perf_event *event, int flags);
|
2010-05-19 20:02:22 +08:00
|
|
|
extern int ftrace_profile_set_filter(struct perf_event *event, int event_id,
|
2009-10-15 11:21:42 +08:00
|
|
|
char *filter_str);
|
|
|
|
extern void ftrace_profile_free_filter(struct perf_event *event);
|
2010-05-19 16:52:27 +08:00
|
|
|
extern void *perf_trace_buf_prepare(int size, unsigned short type,
|
|
|
|
struct pt_regs *regs, int *rctxp);
|
2010-01-28 09:32:29 +08:00
|
|
|
|
|
|
|
static inline void
|
2010-03-05 12:35:37 +08:00
|
|
|
perf_trace_buf_submit(void *raw_data, int size, int rctx, u64 addr,
|
2012-07-11 22:14:58 +08:00
|
|
|
u64 count, struct pt_regs *regs, void *head,
|
|
|
|
struct task_struct *task)
|
2010-01-28 09:32:29 +08:00
|
|
|
{
|
2012-07-11 22:14:58 +08:00
|
|
|
perf_tp_event(addr, count, raw_data, size, regs, head, rctx, task);
|
2010-01-28 09:32:29 +08:00
|
|
|
}
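A minimal sketch of how a perf probe pairs the two helpers above (the
record layout, event type and head list are hypothetical and normally come
from the generated probe code):

struct example_perf_record {
	struct trace_entry	ent;	/* common header, set up by prepare */
	u64			value;
};

static void example_perf_probe(unsigned short type, u64 value,
			       struct pt_regs *regs,
			       struct hlist_head *head)
{
	struct example_perf_record *rec;
	int rctx;

	rec = perf_trace_buf_prepare(sizeof(*rec), type, regs, &rctx);
	if (!rec)
		return;

	rec->value = value;
	perf_trace_buf_submit(rec, sizeof(*rec), rctx, 0, 1, regs,
			      head, NULL);
}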
|
2009-10-15 11:21:42 +08:00
|
|
|
#endif
|
|
|
|
|
2009-04-13 23:20:49 +08:00
|
|
|
#endif /* _LINUX_FTRACE_EVENT_H */
|