tracing updates for 5.19:

The majority of the changes are for fixes and clean ups. Notable changes:

- Rework trace event triggers code to be easier to interact with.

- Support for embedding bootconfig with the kernel (as opposed to having
  it embedded in the initrd/initramfs). This is useful for embedded
  boards without initrd disks.

- Speed up boot by parallelizing the creation of tracefs files.

- Allow absolute ring buffer timestamps to handle timestamps that use
  more than 59 bits.

- Added new tracing clock "TAI" (International Atomic Time).

- Have weak functions show up in the available_filter_functions list as
  __ftrace_invalid_address___<invalid-offset> instead of using the name
  of the function before it.

-----BEGIN PGP SIGNATURE-----

iIoEABYIADIWIQRRSw7ePDh/lE+zeZMp5XQQmuv6qgUCYpOgXRQccm9zdGVkdEBn
b29kbWlzLm9yZwAKCRAp5XQQmuv6qjkKAQDbpemxvpFyJlZqT8KgEIXubu+ag2/q
p0XDHaPS0zF9OQEAjTxg6GMEbnFYl6fzxZtOoEbiaQ7ppfdhRI8t6sSMVA8=
=+nDD
-----END PGP SIGNATURE-----

Merge tag 'trace-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "The majority of the changes are for fixes and clean ups. Notable changes:

  - Rework trace event triggers code to be easier to interact with.

  - Support for embedding bootconfig with the kernel (as opposed to
    having it embedded in the initrd/initramfs). This is useful for
    embedded boards without initrd disks.

  - Speed up boot by parallelizing the creation of tracefs files.

  - Allow absolute ring buffer timestamps to handle timestamps that use
    more than 59 bits.

  - Added new tracing clock "TAI" (International Atomic Time).

  - Have weak functions show up in the available_filter_functions list as
    __ftrace_invalid_address___<invalid-offset> instead of using the name
    of the function before it"

* tag 'trace-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (52 commits)
  ftrace: Add FTRACE_MCOUNT_MAX_OFFSET to avoid adding weak function
  tracing: Fix comments for event_trigger_separate_filter()
  x86/traceponit: Fix comment about irq vector tracepoints
  x86,tracing: Remove unused headers
  ftrace: Clean up hash direct_functions on register failures
  tracing: Fix comments of create_filter()
  tracing: Disable kcov on trace_preemptirq.c
  tracing: Initialize integer variable to prevent garbage return value
  ftrace: Fix typo in comment
  ftrace: Remove return value of ftrace_arch_modify_*()
  tracing: Cleanup code by removing init "char *name"
  tracing: Change "char *" string form to "char []"
  tracing/timerlat: Do not wakeup the thread if the trace stops at the IRQ
  tracing/timerlat: Print stacktrace in the IRQ handler if needed
  tracing/timerlat: Notify IRQ new max latency only if stop tracing is set
  kprobes: Fix build errors with CONFIG_KRETPROBES=n
  tracing: Fix return value of trace_pid_write()
  tracing: Fix potential double free in create_var_ref()
  tracing: Use strim() to remove whitespace instead of doing it manually
  ftrace: Deal with error return code of the ftrace_process_locs() function
  ...
This commit is contained in: commit 76bfd3de34
Documentation/admin-guide/bootconfig.rst

@@ -158,9 +158,15 @@ Each key-value pair is shown in each line with following style::
 Boot Kernel With a Boot Config
 ==============================
 
-Since the boot configuration file is loaded with initrd, it will be added
-to the end of the initrd (initramfs) image file with padding, size,
-checksum and 12-byte magic word as below.
+There are two options to boot the kernel with bootconfig: attaching the
+bootconfig to the initrd image or embedding it in the kernel itself.
+
+Attaching a Boot Config to Initrd
+---------------------------------
+
+Since the boot configuration file is loaded with initrd by default,
+it will be added to the end of the initrd (initramfs) image file with
+padding, size, checksum and 12-byte magic word as below.
 
 [initrd][bootconfig][padding][size(le32)][checksum(le32)][#BOOTCONFIG\n]

@@ -196,6 +202,25 @@ To remove the config from the image, you can use -d option as below::
 Then add "bootconfig" on the normal kernel command line to tell the
 kernel to look for the bootconfig at the end of the initrd file.
 
+Embedding a Boot Config into Kernel
+-----------------------------------
+
+If you can not use initrd, you can also embed the bootconfig file in the
+kernel by Kconfig options. In this case, you need to recompile the kernel
+with the following configs::
+
+ CONFIG_BOOT_CONFIG_EMBED=y
+ CONFIG_BOOT_CONFIG_EMBED_FILE="/PATH/TO/BOOTCONFIG/FILE"
+
+``CONFIG_BOOT_CONFIG_EMBED_FILE`` requires an absolute path or a relative
+path to the bootconfig file from source tree or object tree.
+The kernel will embed it as the default bootconfig.
+
+Just as when attaching the bootconfig to the initrd, you need ``bootconfig``
+option on the kernel command line to enable the embedded bootconfig.
+
+Note that even if you set this option, you can override the embedded
+bootconfig by another bootconfig which attached to the initrd.
+
 Kernel parameters via Boot Config
 =================================
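For a concrete sense of the attach path, here is a usage sketch with the
tools/bootconfig utility; the file names (myboot.conf, /boot/initrd.img)
are assumptions for illustration:

    # bootconfig -a myboot.conf /boot/initrd.img
    # bootconfig -d /boot/initrd.img

The first command appends the config (with the padding, size, checksum
and magic word described above); the second removes it again. Booting
with the attached config still requires "bootconfig" on the kernel
command line.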
Documentation/trace/ftrace.rst

@@ -517,6 +517,18 @@ of ftrace. Here is a list of some of the key files:
 	  processing should be able to handle them. See comments in the
 	  ktime_get_boot_fast_ns() function for more information.
 
+	tai:
+		This is the tai clock (CLOCK_TAI) and is derived from the wall-
+		clock time. However, this clock does not experience
+		discontinuities and backwards jumps caused by NTP inserting leap
+		seconds. Since the clock access is designed for use in tracing,
+		side effects are possible. The clock access may yield wrong
+		readouts in case the internal TAI offset is updated e.g., caused
+		by setting the system time or using adjtimex() with an offset.
+		These effects are rare and post processing should be able to
+		handle them. See comments in the ktime_get_tai_fast_ns()
+		function for more information.
+
 	To set a clock, simply echo the clock name into this file::
 
 	  # echo global > trace_clock
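A quick usage sketch for the new clock, assuming tracefs is mounted at
/sys/kernel/tracing (the exact clock list printed by trace_clock depends
on the kernel configuration and architecture):

    # cd /sys/kernel/tracing
    # echo tai > trace_clock
    # cat trace_clock
    local global counter uptime perf mono mono_raw boot [tai] x86-tsc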
Documentation/trace/timerlat-tracer.rst

@@ -74,8 +74,9 @@ directory. The timerlat configs are:
  - stop_tracing_total_us: stop the system tracing if a
    timer latency at the *thread* context is higher than the configured
    value happens. Writing 0 disables this option.
- - print_stack: save the stack of the IRQ occurrence, and print
-   it after the *thread context* event".
+ - print_stack: save the stack of the IRQ occurrence. The stack is printed
+   after the *thread context* event, or at the IRQ handler if *stop_tracing_us*
+   is hit.
 
 timerlat and osnoise
 ----------------------------
MAINTAINERS

@@ -7517,6 +7517,7 @@ S:	Maintained
 F:	Documentation/admin-guide/bootconfig.rst
 F:	fs/proc/bootconfig.c
 F:	include/linux/bootconfig.h
+F:	lib/bootconfig-data.S
 F:	lib/bootconfig.c
 F:	tools/bootconfig/*
 F:	tools/bootconfig/scripts/*

@@ -20119,8 +20120,8 @@ M:	Ingo Molnar <mingo@redhat.com>
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
 F:	Documentation/trace/ftrace.rst
-F:	arch/*/*/*/ftrace.h
-F:	arch/*/kernel/ftrace.c
+F:	arch/*/*/*/*ftrace*
+F:	arch/*/*/*ftrace*
 F:	fs/tracefs/
 F:	include/*/ftrace.h
 F:	include/linux/trace*.h
arch/arm/kernel/ftrace.c

@@ -79,16 +79,14 @@ static unsigned long __ref adjust_address(struct dyn_ftrace *rec,
 	return (unsigned long)&ftrace_regs_caller_from_init;
 }
 
-int ftrace_arch_code_modify_prepare(void)
+void ftrace_arch_code_modify_prepare(void)
 {
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
 {
 	/* Make sure any TLB misses during machine stop are cleared. */
 	flush_tlb_all();
-	return 0;
 }
 
 static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr,
arch/riscv/kernel/ftrace.c

@@ -12,16 +12,14 @@
 #include <asm/patch.h>
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-int ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
+void ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
 {
 	mutex_lock(&text_mutex);
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
+void ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
 {
 	mutex_unlock(&text_mutex);
-	return 0;
 }
 
 static int ftrace_check_current_call(unsigned long hook_pos,
arch/s390/kernel/ftrace.c

@@ -225,14 +225,13 @@ void arch_ftrace_update_code(int command)
 	ftrace_modify_all_code(command);
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
 {
 	/*
 	 * Flush any pre-fetched instructions on all
 	 * CPUs to make the new code visible.
 	 */
 	text_poke_sync_lock();
-	return 0;
 }
 
 #ifdef CONFIG_MODULES
arch/x86/include/asm/ftrace.h

@@ -9,6 +9,13 @@
 # define MCOUNT_ADDR		((unsigned long)(__fentry__))
 #define MCOUNT_INSN_SIZE	5 /* sizeof mcount call */
 
+/* Ignore unused weak functions which will have non zero offsets */
+#ifdef CONFIG_HAVE_FENTRY
+# include <asm/ibt.h>
+/* Add offset for endbr64 if IBT enabled */
+# define FTRACE_MCOUNT_MAX_OFFSET	ENDBR_INSN_SIZE
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #endif
arch/x86/kernel/ftrace.c

@@ -37,7 +37,7 @@
 
 static int ftrace_poke_late = 0;
 
-int ftrace_arch_code_modify_prepare(void)
+void ftrace_arch_code_modify_prepare(void)
     __acquires(&text_mutex)
 {
 	/*

@@ -47,10 +47,9 @@ int ftrace_arch_code_modify_prepare(void)
 	 */
 	mutex_lock(&text_mutex);
 	ftrace_poke_late = 1;
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
     __releases(&text_mutex)
 {
 	/*

@@ -61,7 +60,6 @@ int ftrace_arch_code_modify_post_process(void)
 	text_poke_finish();
 	ftrace_poke_late = 0;
 	mutex_unlock(&text_mutex);
-	return 0;
 }
 
 static const char *ftrace_nop_replace(void)
arch/x86/kernel/tracepoint.c

@@ -1,17 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Code for supporting irq vector tracepoints.
  *
  * Copyright (C) 2013 Seiji Aguchi <seiji.aguchi@hds.com>
  *
  */
 #include <linux/jump_label.h>
 #include <linux/atomic.h>
 
-#include <asm/hw_irq.h>
-#include <asm/desc.h>
 #include <asm/trace/exceptions.h>
 #include <asm/trace/irq_vectors.h>
 
 DEFINE_STATIC_KEY_FALSE(trace_pagefault_key);
include/linux/bootconfig.h

@@ -289,4 +289,14 @@ int __init xbc_get_info(int *node_size, size_t *data_size);
 /* XBC cleanup data structures */
 void __init xbc_exit(void);
 
+/* XBC embedded bootconfig data in kernel */
+#ifdef CONFIG_BOOT_CONFIG_EMBED
+const char * __init xbc_get_embedded_bootconfig(size_t *size);
+#else
+static inline const char *xbc_get_embedded_bootconfig(size_t *size)
+{
+	return NULL;
+}
+#endif
+
 #endif
include/linux/ftrace.h

@@ -452,8 +452,8 @@ static inline void stack_tracer_enable(void) { }
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 
-int ftrace_arch_code_modify_prepare(void);
-int ftrace_arch_code_modify_post_process(void);
+void ftrace_arch_code_modify_prepare(void);
+void ftrace_arch_code_modify_post_process(void);
 
 enum ftrace_bug_type {
 	FTRACE_BUG_UNKNOWN,
include/linux/kprobes.h

@@ -424,7 +424,7 @@ void unregister_kretprobe(struct kretprobe *rp);
 int register_kretprobes(struct kretprobe **rps, int num);
 void unregister_kretprobes(struct kretprobe **rps, int num);
 
-#ifdef CONFIG_KRETPROBE_ON_RETHOOK
+#if defined(CONFIG_KRETPROBE_ON_RETHOOK) || !defined(CONFIG_KRETPROBES)
 #define kprobe_flush_task(tk)	do {} while (0)
 #else
 void kprobe_flush_task(struct task_struct *tk);
init/Kconfig

@@ -1338,7 +1338,7 @@ endif
 
 config BOOT_CONFIG
 	bool "Boot config support"
-	select BLK_DEV_INITRD
+	select BLK_DEV_INITRD if !BOOT_CONFIG_EMBED
 	help
 	  Extra boot config allows system admin to pass a config file as
 	  complemental extension of kernel cmdline when booting.

@@ -1348,6 +1348,25 @@ config BOOT_CONFIG
 
 	  If unsure, say Y.
 
+config BOOT_CONFIG_EMBED
+	bool "Embed bootconfig file in the kernel"
+	depends on BOOT_CONFIG
+	help
+	  Embed a bootconfig file given by BOOT_CONFIG_EMBED_FILE in the
+	  kernel. Usually, the bootconfig file is loaded with the initrd
+	  image. But if the system doesn't support initrd, this option will
+	  help you by embedding a bootconfig file while building the kernel.
+
+	  If unsure, say N.
+
+config BOOT_CONFIG_EMBED_FILE
+	string "Embedded bootconfig file path"
+	depends on BOOT_CONFIG_EMBED
+	help
+	  Specify a bootconfig file which will be embedded to the kernel.
+	  This bootconfig will be used if there is no initrd or no other
+	  bootconfig in the initrd.
+
 config INITRAMFS_PRESERVE_MTIME
 	bool "Preserve cpio archive mtimes in initramfs"
 	default y
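To make the new options concrete, a minimal sketch of an embedded-bootconfig
build follows; the fragment path kernel/configs/myboot.bconf and the keys
chosen are assumptions for illustration, not part of this series:

    # .config fragment
    CONFIG_BOOT_CONFIG=y
    CONFIG_BOOT_CONFIG_EMBED=y
    CONFIG_BOOT_CONFIG_EMBED_FILE="kernel/configs/myboot.bconf"

    # kernel/configs/myboot.bconf
    ftrace.event.sched.sched_switch.enabled = 1
    kernel.console = "ttyS0,115200"

After rebuilding, booting with "bootconfig" on the kernel command line picks
up the embedded file whenever no bootconfig is attached to an initrd.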
init/main.c

@@ -266,7 +266,7 @@ static int __init loglevel(char *str)
 early_param("loglevel", loglevel);
 
 #ifdef CONFIG_BLK_DEV_INITRD
-static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)
+static void * __init get_boot_config_from_initrd(size_t *_size)
 {
 	u32 size, csum;
 	char *data;

@@ -300,17 +300,20 @@ static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)
 		return NULL;
 	}
 
+	if (xbc_calc_checksum(data, size) != csum) {
+		pr_err("bootconfig checksum failed\n");
+		return NULL;
+	}
+
 	/* Remove bootconfig from initramfs/initrd */
 	initrd_end = (unsigned long)data;
 	if (_size)
 		*_size = size;
-	if (_csum)
-		*_csum = csum;
 
 	return data;
 }
 #else
-static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)
+static void * __init get_boot_config_from_initrd(size_t *_size)
 {
 	return NULL;
 }

@@ -407,14 +410,16 @@ static int __init warn_bootconfig(char *str)
 static void __init setup_boot_config(void)
 {
 	static char tmp_cmdline[COMMAND_LINE_SIZE] __initdata;
-	const char *msg;
-	int pos;
-	u32 size, csum;
-	char *data, *err;
-	int ret;
+	const char *msg, *data;
+	int pos, ret;
+	size_t size;
+	char *err;
 
 	/* Cut out the bootconfig data even if we have no bootconfig option */
-	data = get_boot_config_from_initrd(&size, &csum);
+	data = get_boot_config_from_initrd(&size);
+	/* If there is no bootconfig in initrd, try embedded one. */
+	if (!data)
+		data = xbc_get_embedded_bootconfig(&size);
 
 	strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
 	err = parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,

@@ -433,13 +438,8 @@ static void __init setup_boot_config(void)
 	}
 
 	if (size >= XBC_DATA_MAX) {
-		pr_err("bootconfig size %d greater than max size %d\n",
-		       size, XBC_DATA_MAX);
-		return;
-	}
-
-	if (xbc_calc_checksum(data, size) != csum) {
-		pr_err("bootconfig checksum failed\n");
+		pr_err("bootconfig size %ld greater than max size %d\n",
+		       (long)size, XBC_DATA_MAX);
 		return;
 	}
 

@@ -452,7 +452,7 @@ static void __init setup_boot_config(void)
 			msg, pos);
 	} else {
 		xbc_get_info(&ret, NULL);
-		pr_info("Load bootconfig: %d bytes %d nodes\n", size, ret);
+		pr_info("Load bootconfig: %ld bytes %d nodes\n", (long)size, ret);
 		/* keys starting with "kernel." are passed via cmdline */
 		extra_command_line = xbc_make_cmdline("kernel");
 		/* Also, "init." keys are init arguments */

@@ -471,7 +471,7 @@ static void __init exit_boot_config(void)
 static void __init setup_boot_config(void)
 {
 	/* Remove bootconfig data from initrd */
-	get_boot_config_from_initrd(NULL, NULL);
+	get_boot_config_from_initrd(NULL);
 }
 
 static int __init warn_bootconfig(char *str)
kernel/kprobes.c

@@ -1257,79 +1257,6 @@ void kprobe_busy_end(void)
 	preempt_enable();
 }
 
-#if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
-static void free_rp_inst_rcu(struct rcu_head *head)
-{
-	struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu);
-
-	if (refcount_dec_and_test(&ri->rph->ref))
-		kfree(ri->rph);
-	kfree(ri);
-}
-NOKPROBE_SYMBOL(free_rp_inst_rcu);
-
-static void recycle_rp_inst(struct kretprobe_instance *ri)
-{
-	struct kretprobe *rp = get_kretprobe(ri);
-
-	if (likely(rp))
-		freelist_add(&ri->freelist, &rp->freelist);
-	else
-		call_rcu(&ri->rcu, free_rp_inst_rcu);
-}
-NOKPROBE_SYMBOL(recycle_rp_inst);
-
-/*
- * This function is called from delayed_put_task_struct() when a task is
- * dead and cleaned up to recycle any kretprobe instances associated with
- * this task. These left over instances represent probed functions that
- * have been called but will never return.
- */
-void kprobe_flush_task(struct task_struct *tk)
-{
-	struct kretprobe_instance *ri;
-	struct llist_node *node;
-
-	/* Early boot, not yet initialized. */
-	if (unlikely(!kprobes_initialized))
-		return;
-
-	kprobe_busy_begin();
-
-	node = __llist_del_all(&tk->kretprobe_instances);
-	while (node) {
-		ri = container_of(node, struct kretprobe_instance, llist);
-		node = node->next;
-
-		recycle_rp_inst(ri);
-	}
-
-	kprobe_busy_end();
-}
-NOKPROBE_SYMBOL(kprobe_flush_task);
-
-static inline void free_rp_inst(struct kretprobe *rp)
-{
-	struct kretprobe_instance *ri;
-	struct freelist_node *node;
-	int count = 0;
-
-	node = rp->freelist.head;
-	while (node) {
-		ri = container_of(node, struct kretprobe_instance, freelist);
-		node = node->next;
-
-		kfree(ri);
-		count++;
-	}
-
-	if (refcount_sub_and_test(count, &rp->rph->ref)) {
-		kfree(rp->rph);
-		rp->rph = NULL;
-	}
-}
-#endif /* !CONFIG_KRETPROBE_ON_RETHOOK */
-
 /* Add the new probe to 'ap->list'. */
 static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 {

@@ -1928,6 +1855,77 @@ static struct notifier_block kprobe_exceptions_nb = {
 
 #ifdef CONFIG_KRETPROBES
 
+#if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
+static void free_rp_inst_rcu(struct rcu_head *head)
+{
+	struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu);
+
+	if (refcount_dec_and_test(&ri->rph->ref))
+		kfree(ri->rph);
+	kfree(ri);
+}
+NOKPROBE_SYMBOL(free_rp_inst_rcu);
+
+static void recycle_rp_inst(struct kretprobe_instance *ri)
+{
+	struct kretprobe *rp = get_kretprobe(ri);
+
+	if (likely(rp))
+		freelist_add(&ri->freelist, &rp->freelist);
+	else
+		call_rcu(&ri->rcu, free_rp_inst_rcu);
+}
+NOKPROBE_SYMBOL(recycle_rp_inst);
+
+/*
+ * This function is called from delayed_put_task_struct() when a task is
+ * dead and cleaned up to recycle any kretprobe instances associated with
+ * this task. These left over instances represent probed functions that
+ * have been called but will never return.
+ */
+void kprobe_flush_task(struct task_struct *tk)
+{
+	struct kretprobe_instance *ri;
+	struct llist_node *node;
+
+	/* Early boot, not yet initialized. */
+	if (unlikely(!kprobes_initialized))
+		return;
+
+	kprobe_busy_begin();
+
+	node = __llist_del_all(&tk->kretprobe_instances);
+	while (node) {
+		ri = container_of(node, struct kretprobe_instance, llist);
+		node = node->next;
+
+		recycle_rp_inst(ri);
+	}
+
+	kprobe_busy_end();
+}
+NOKPROBE_SYMBOL(kprobe_flush_task);
+
+static inline void free_rp_inst(struct kretprobe *rp)
+{
+	struct kretprobe_instance *ri;
+	struct freelist_node *node;
+	int count = 0;
+
+	node = rp->freelist.head;
+	while (node) {
+		ri = container_of(node, struct kretprobe_instance, freelist);
+		node = node->next;
+
+		kfree(ri);
+		count++;
+	}
+
+	if (refcount_sub_and_test(count, &rp->rph->ref)) {
+		kfree(rp->rph);
+		rp->rph = NULL;
+	}
+}
+
 /* This assumes the 'tsk' is the current task or the is not running. */
 static kprobe_opcode_t *__kretprobe_find_ret_addr(struct task_struct *tsk,
 						  struct llist_node **cur)
kernel/trace/Makefile

@@ -31,6 +31,10 @@ ifdef CONFIG_GCOV_PROFILE_FTRACE
 GCOV_PROFILE := y
 endif
 
+# Functions in this file could be invoked from early interrupt
+# code and produce random code coverage.
+KCOV_INSTRUMENT_trace_preemptirq.o := n
+
 CFLAGS_bpf_trace.o := -I$(src)
 
 CFLAGS_trace_benchmark.o := -I$(src)
kernel/trace/ftrace.c

@@ -45,6 +45,8 @@
 #include "trace_output.h"
 #include "trace_stat.h"
 
+#define FTRACE_INVALID_FUNCTION		"__ftrace_invalid_address__"
+
 #define FTRACE_WARN_ON(cond)			\
 	({					\
 		int ___r = cond;		\

@@ -119,7 +121,7 @@ struct ftrace_ops __rcu *ftrace_ops_list __read_mostly = &ftrace_list_end;
 ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
 struct ftrace_ops global_ops;
 
-/* Defined by vmlinux.lds.h see the commment above arch_ftrace_ops_list_func for details */
+/* Defined by vmlinux.lds.h see the comment above arch_ftrace_ops_list_func for details */
 void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
 			  struct ftrace_ops *op, struct ftrace_regs *fregs);
 
@@ -952,7 +954,6 @@ static struct tracer_stat function_stats __initdata = {
 static __init void ftrace_profile_tracefs(struct dentry *d_tracer)
 {
 	struct ftrace_profile_stat *stat;
-	struct dentry *entry;
 	char *name;
 	int ret;
 	int cpu;

@@ -983,11 +984,9 @@ static __init void ftrace_profile_tracefs(struct dentry *d_tracer)
 		}
 	}
 
-	entry = tracefs_create_file("function_profile_enabled",
-				    TRACE_MODE_WRITE, d_tracer, NULL,
-				    &ftrace_profile_fops);
-	if (!entry)
-		pr_warn("Could not create tracefs 'function_profile_enabled' entry\n");
+	trace_create_file("function_profile_enabled",
+			  TRACE_MODE_WRITE, d_tracer, NULL,
+			  &ftrace_profile_fops);
 }
 
 #else /* CONFIG_FUNCTION_PROFILER */
@@ -2707,18 +2706,16 @@ ftrace_nop_initialize(struct module *mod, struct dyn_ftrace *rec)
 * archs can override this function if they must do something
 * before the modifying code is performed.
 */
-int __weak ftrace_arch_code_modify_prepare(void)
+void __weak ftrace_arch_code_modify_prepare(void)
 {
-	return 0;
 }
 
 /*
 * archs can override this function if they must do something
 * after the modifying code is performed.
 */
-int __weak ftrace_arch_code_modify_post_process(void)
+void __weak ftrace_arch_code_modify_post_process(void)
 {
-	return 0;
 }
 
 void ftrace_modify_all_code(int command)

@@ -2804,12 +2801,7 @@ void __weak arch_ftrace_update_code(int command)
 
 static void ftrace_run_update_code(int command)
 {
-	int ret;
-
-	ret = ftrace_arch_code_modify_prepare();
-	FTRACE_WARN_ON(ret);
-	if (ret)
-		return;
+	ftrace_arch_code_modify_prepare();
 
 	/*
 	 * By default we use stop_machine() to modify the code.

@@ -2819,8 +2811,7 @@ static void ftrace_run_update_code(int command)
 	 */
 	arch_ftrace_update_code(command);
 
-	ret = ftrace_arch_code_modify_post_process();
-	FTRACE_WARN_ON(ret);
+	ftrace_arch_code_modify_post_process();
 }
 
 static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
@@ -3631,6 +3622,105 @@ static void add_trampoline_func(struct seq_file *m, struct ftrace_ops *ops,
 		seq_printf(m, " ->%pS", ptr);
 }
 
+#ifdef FTRACE_MCOUNT_MAX_OFFSET
+/*
+ * Weak functions can still have an mcount/fentry that is saved in
+ * the __mcount_loc section. These can be detected by having a
+ * symbol offset of greater than FTRACE_MCOUNT_MAX_OFFSET, as the
+ * symbol found by kallsyms is not the function that the mcount/fentry
+ * is part of. The offset is much greater in these cases.
+ *
+ * Test the record to make sure that the ip points to a valid kallsyms
+ * and if not, mark it disabled.
+ */
+static int test_for_valid_rec(struct dyn_ftrace *rec)
+{
+	char str[KSYM_SYMBOL_LEN];
+	unsigned long offset;
+	const char *ret;
+
+	ret = kallsyms_lookup(rec->ip, NULL, &offset, NULL, str);
+
+	/* Weak functions can cause invalid addresses */
+	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET) {
+		rec->flags |= FTRACE_FL_DISABLED;
+		return 0;
+	}
+	return 1;
+}
+
+static struct workqueue_struct *ftrace_check_wq __initdata;
+static struct work_struct ftrace_check_work __initdata;
+
+/*
+ * Scan all the mcount/fentry entries to make sure they are valid.
+ */
+static __init void ftrace_check_work_func(struct work_struct *work)
+{
+	struct ftrace_page *pg;
+	struct dyn_ftrace *rec;
+
+	mutex_lock(&ftrace_lock);
+	do_for_each_ftrace_rec(pg, rec) {
+		test_for_valid_rec(rec);
+	} while_for_each_ftrace_rec();
+	mutex_unlock(&ftrace_lock);
+}
+
+static int __init ftrace_check_for_weak_functions(void)
+{
+	INIT_WORK(&ftrace_check_work, ftrace_check_work_func);
+
+	ftrace_check_wq = alloc_workqueue("ftrace_check_wq", WQ_UNBOUND, 0);
+
+	queue_work(ftrace_check_wq, &ftrace_check_work);
+	return 0;
+}
+
+static int __init ftrace_check_sync(void)
+{
+	/* Make sure the ftrace_check updates are finished */
+	if (ftrace_check_wq)
+		destroy_workqueue(ftrace_check_wq);
+	return 0;
+}
+
+late_initcall_sync(ftrace_check_sync);
+subsys_initcall(ftrace_check_for_weak_functions);
+
+static int print_rec(struct seq_file *m, unsigned long ip)
+{
+	unsigned long offset;
+	char str[KSYM_SYMBOL_LEN];
+	char *modname;
+	const char *ret;
+
+	ret = kallsyms_lookup(ip, NULL, &offset, &modname, str);
+	/* Weak functions can cause invalid addresses */
+	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET) {
+		snprintf(str, KSYM_SYMBOL_LEN, "%s_%ld",
+			 FTRACE_INVALID_FUNCTION, offset);
+		ret = NULL;
+	}
+
+	seq_puts(m, str);
+	if (modname)
+		seq_printf(m, " [%s]", modname);
+	return ret == NULL ? -1 : 0;
+}
+#else
+static inline int test_for_valid_rec(struct dyn_ftrace *rec)
+{
+	return 1;
+}
+
+static inline int print_rec(struct seq_file *m, unsigned long ip)
+{
+	seq_printf(m, "%ps", (void *)ip);
+	return 0;
+}
+#endif
+
 static int t_show(struct seq_file *m, void *v)
 {
 	struct ftrace_iterator *iter = m->private;

@@ -3655,7 +3745,13 @@ static int t_show(struct seq_file *m, void *v)
 	if (!rec)
 		return 0;
 
-	seq_printf(m, "%ps", (void *)rec->ip);
+	if (print_rec(m, rec->ip)) {
+		/* This should only happen when a rec is disabled */
+		WARN_ON_ONCE(!(rec->flags & FTRACE_FL_DISABLED));
+		seq_putc(m, '\n');
+		return 0;
+	}
 
 	if (iter->flags & FTRACE_ITER_ENABLED) {
 		struct ftrace_ops *ops;
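The user-visible effect of the block above: records whose kallsyms offset
exceeds FTRACE_MCOUNT_MAX_OFFSET are printed with the placeholder name
rather than the name of the preceding function. A hypothetical excerpt
(the offsets are made up for illustration):

    # grep __ftrace_invalid_address /sys/kernel/tracing/available_filter_functions
    __ftrace_invalid_address___16
    __ftrace_invalid_address___112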
@@ -3973,6 +4069,24 @@ add_rec_by_index(struct ftrace_hash *hash, struct ftrace_glob *func_g,
 	return 0;
 }
 
+#ifdef FTRACE_MCOUNT_MAX_OFFSET
+static int lookup_ip(unsigned long ip, char **modname, char *str)
+{
+	unsigned long offset;
+
+	kallsyms_lookup(ip, NULL, &offset, modname, str);
+	if (offset > FTRACE_MCOUNT_MAX_OFFSET)
+		return -1;
+	return 0;
+}
+#else
+static int lookup_ip(unsigned long ip, char **modname, char *str)
+{
+	kallsyms_lookup(ip, NULL, NULL, modname, str);
+	return 0;
+}
+#endif
+
 static int
 ftrace_match_record(struct dyn_ftrace *rec, struct ftrace_glob *func_g,
 		    struct ftrace_glob *mod_g, int exclude_mod)

@@ -3980,7 +4094,12 @@ ftrace_match_record(struct dyn_ftrace *rec, struct ftrace_glob *func_g,
 	char str[KSYM_SYMBOL_LEN];
 	char *modname;
 
-	kallsyms_lookup(rec->ip, NULL, NULL, &modname, str);
+	if (lookup_ip(rec->ip, &modname, str)) {
+		/* This should only happen when a rec is disabled */
+		WARN_ON_ONCE(system_state == SYSTEM_RUNNING &&
+			     !(rec->flags & FTRACE_FL_DISABLED));
+		return 0;
+	}
 
 	if (mod_g) {
 		int mod_matches = (modname) ? ftrace_match(modname, mod_g) : 0;
@@ -4431,7 +4550,7 @@ int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper,
 * @ip: The instruction pointer address to remove the data from
 *
 * Returns the data if it is found, otherwise NULL.
- * Note, if the data pointer is used as the data itself, (see 
+ * Note, if the data pointer is used as the data itself, (see
 * ftrace_func_mapper_find_ip(), then the return value may be meaningless,
 * if the data pointer was set to zero.
 */

@@ -4526,8 +4645,8 @@ register_ftrace_function_probe(char *glob, struct trace_array *tr,
 			       struct ftrace_probe_ops *probe_ops,
 			       void *data)
 {
+	struct ftrace_func_probe *probe = NULL, *iter;
 	struct ftrace_func_entry *entry;
-	struct ftrace_func_probe *probe;
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *old_hash;
 	struct ftrace_hash *hash;

@@ -4546,11 +4665,13 @@ register_ftrace_function_probe(char *glob, struct trace_array *tr,
 
 	mutex_lock(&ftrace_lock);
 	/* Check if the probe_ops is already registered */
-	list_for_each_entry(probe, &tr->func_probes, list) {
-		if (probe->probe_ops == probe_ops)
+	list_for_each_entry(iter, &tr->func_probes, list) {
+		if (iter->probe_ops == probe_ops) {
+			probe = iter;
 			break;
+		}
 	}
-	if (&probe->list == &tr->func_probes) {
+	if (!probe) {
 		probe = kzalloc(sizeof(*probe), GFP_KERNEL);
 		if (!probe) {
 			mutex_unlock(&ftrace_lock);

@@ -4668,9 +4789,9 @@ int
 unregister_ftrace_function_probe_func(char *glob, struct trace_array *tr,
 				      struct ftrace_probe_ops *probe_ops)
 {
+	struct ftrace_func_probe *probe = NULL, *iter;
 	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_func_entry *entry;
-	struct ftrace_func_probe *probe;
 	struct ftrace_glob func_g;
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *old_hash;

@@ -4698,11 +4819,13 @@ unregister_ftrace_function_probe_func(char *glob, struct trace_array *tr,
 
 	mutex_lock(&ftrace_lock);
 	/* Check if the probe_ops is already registered */
-	list_for_each_entry(probe, &tr->func_probes, list) {
-		if (probe->probe_ops == probe_ops)
+	list_for_each_entry(iter, &tr->func_probes, list) {
+		if (iter->probe_ops == probe_ops) {
+			probe = iter;
 			break;
+		}
 	}
-	if (&probe->list == &tr->func_probes)
+	if (!probe)
 		goto err_unlock_ftrace;
 
 	ret = -EINVAL;
@@ -5161,8 +5284,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 		goto out_unlock;
 
 	ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
-	if (ret)
-		remove_hash_entry(direct_functions, entry);
 
 	if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
 		ret = register_ftrace_function(&direct_ops);

@@ -5171,6 +5292,7 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 	}
 
 	if (ret) {
+		remove_hash_entry(direct_functions, entry);
 		kfree(entry);
 		if (!direct->count) {
 			list_del_rcu(&direct->next);

@@ -6793,6 +6915,13 @@ void ftrace_module_enable(struct module *mod)
 		    !within_module_init(rec->ip, mod))
 			break;
 
+		/* Weak functions should still be ignored */
+		if (!test_for_valid_rec(rec)) {
+			/* Clear all other flags. Should not be enabled anyway */
+			rec->flags = FTRACE_FL_DISABLED;
+			continue;
+		}
+
 		cnt = 0;
 
 		/*

@@ -6829,11 +6958,16 @@ void ftrace_module_enable(struct module *mod)
 
 void ftrace_module_init(struct module *mod)
 {
+	int ret;
+
 	if (ftrace_disabled || !mod->num_ftrace_callsites)
 		return;
 
-	ftrace_process_locs(mod, mod->ftrace_callsites,
-			    mod->ftrace_callsites + mod->num_ftrace_callsites);
+	ret = ftrace_process_locs(mod, mod->ftrace_callsites,
+				  mod->ftrace_callsites + mod->num_ftrace_callsites);
+	if (ret)
+		pr_warn("ftrace: failed to allocate entries for module '%s' functions\n",
+			mod->name);
 }
 
 static void save_ftrace_mod_rec(struct ftrace_mod_map *mod_map,

@@ -7166,15 +7300,19 @@ void __init ftrace_init(void)
 	pr_info("ftrace: allocating %ld entries in %ld pages\n",
 		count, count / ENTRIES_PER_PAGE + 1);
 
-	last_ftrace_enabled = ftrace_enabled = 1;
-
 	ret = ftrace_process_locs(NULL,
 				  __start_mcount_loc,
 				  __stop_mcount_loc);
+	if (ret) {
+		pr_warn("ftrace: failed to allocate entries for functions\n");
+		goto failed;
+	}
 
 	pr_info("ftrace: allocated %ld pages with %ld groups\n",
 		ftrace_number_of_pages, ftrace_number_of_groups);
 
+	last_ftrace_enabled = ftrace_enabled = 1;
+
 	set_ftrace_early_filters();
 
 	return;
kernel/trace/pid_list.c

@@ -118,9 +118,9 @@ static inline unsigned int pid_join(unsigned int upper1,
 /**
 * trace_pid_list_is_set - test if the pid is set in the list
 * @pid_list: The pid list to test
- * @pid: The pid to to see if set in the list.
+ * @pid: The pid to see if set in the list.
 *
- * Tests if @pid is is set in the @pid_list. This is usually called
+ * Tests if @pid is set in the @pid_list. This is usually called
 * from the scheduler when a task is scheduled. Its pid is checked
 * if it should be traced or not.
 *
kernel/trace/ring_buffer.c

@@ -29,6 +29,14 @@
 
 #include <asm/local.h>
 
+/*
+ * The "absolute" timestamp in the buffer is only 59 bits.
+ * If a clock has the 5 MSBs set, it needs to be saved and
+ * reinserted.
+ */
+#define TS_MSB		(0xf8ULL << 56)
+#define ABS_TS_MASK	(~TS_MSB)
+
 static void update_pages_handler(struct work_struct *work);
 
 /*

@@ -468,6 +476,7 @@ struct rb_time_struct {
 	local_t		cnt;
 	local_t		top;
 	local_t		bottom;
+	local_t		msb;
 };
 #else
 #include <asm/local64.h>

@@ -569,7 +578,6 @@ struct ring_buffer_iter {
 * For the ring buffer, 64 bit required operations for the time is
 * the following:
 *
- *  - Only need 59 bits (uses 60 to make it even).
 *  - Reads may fail if it interrupted a modification of the time stamp.
 *      It will succeed if it did not interrupt another write even if
 *      the read itself is interrupted by a write.

@@ -594,6 +602,7 @@ struct ring_buffer_iter {
 */
 #define RB_TIME_SHIFT	30
 #define RB_TIME_VAL_MASK ((1 << RB_TIME_SHIFT) - 1)
+#define RB_TIME_MSB_SHIFT	 60
 
 static inline int rb_time_cnt(unsigned long val)
 {

@@ -613,7 +622,7 @@ static inline u64 rb_time_val(unsigned long top, unsigned long bottom)
 
 static inline bool __rb_time_read(rb_time_t *t, u64 *ret, unsigned long *cnt)
 {
-	unsigned long top, bottom;
+	unsigned long top, bottom, msb;
 	unsigned long c;
 
 	/*

@@ -625,6 +634,7 @@ static inline bool __rb_time_read(rb_time_t *t, u64 *ret, unsigned long *cnt)
 		c = local_read(&t->cnt);
 		top = local_read(&t->top);
 		bottom = local_read(&t->bottom);
+		msb = local_read(&t->msb);
 	} while (c != local_read(&t->cnt));
 
 	*cnt = rb_time_cnt(top);

@@ -633,7 +643,8 @@ static inline bool __rb_time_read(rb_time_t *t, u64 *ret, unsigned long *cnt)
 	if (*cnt != rb_time_cnt(bottom))
 		return false;
 
-	*ret = rb_time_val(top, bottom);
+	/* The shift to msb will lose its cnt bits */
+	*ret = rb_time_val(top, bottom) | ((u64)msb << RB_TIME_MSB_SHIFT);
 	return true;
 }

@@ -649,10 +660,12 @@ static inline unsigned long rb_time_val_cnt(unsigned long val, unsigned long cnt
 	return (val & RB_TIME_VAL_MASK) | ((cnt & 3) << RB_TIME_SHIFT);
 }
 
-static inline void rb_time_split(u64 val, unsigned long *top, unsigned long *bottom)
+static inline void rb_time_split(u64 val, unsigned long *top, unsigned long *bottom,
+				 unsigned long *msb)
 {
 	*top = (unsigned long)((val >> RB_TIME_SHIFT) & RB_TIME_VAL_MASK);
 	*bottom = (unsigned long)(val & RB_TIME_VAL_MASK);
+	*msb = (unsigned long)(val >> RB_TIME_MSB_SHIFT);
 }
 
 static inline void rb_time_val_set(local_t *t, unsigned long val, unsigned long cnt)

@@ -663,15 +676,16 @@ static inline void rb_time_val_set(local_t *t, unsigned long val, unsigned long
 
 static void rb_time_set(rb_time_t *t, u64 val)
 {
-	unsigned long cnt, top, bottom;
+	unsigned long cnt, top, bottom, msb;
 
-	rb_time_split(val, &top, &bottom);
+	rb_time_split(val, &top, &bottom, &msb);
 
 	/* Writes always succeed with a valid number even if it gets interrupted. */
 	do {
 		cnt = local_inc_return(&t->cnt);
 		rb_time_val_set(&t->top, top, cnt);
 		rb_time_val_set(&t->bottom, bottom, cnt);
+		rb_time_val_set(&t->msb, val >> RB_TIME_MSB_SHIFT, cnt);
 	} while (cnt != local_read(&t->cnt));
 }

@@ -686,8 +700,8 @@ rb_time_read_cmpxchg(local_t *l, unsigned long expect, unsigned long set)
 
 static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 {
-	unsigned long cnt, top, bottom;
-	unsigned long cnt2, top2, bottom2;
+	unsigned long cnt, top, bottom, msb;
+	unsigned long cnt2, top2, bottom2, msb2;
 	u64 val;
 
 	/* The cmpxchg always fails if it interrupted an update */

@@ -703,16 +717,18 @@ static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 
 	cnt2 = cnt + 1;
 
-	rb_time_split(val, &top, &bottom);
+	rb_time_split(val, &top, &bottom, &msb);
 	top = rb_time_val_cnt(top, cnt);
 	bottom = rb_time_val_cnt(bottom, cnt);
 
-	rb_time_split(set, &top2, &bottom2);
+	rb_time_split(set, &top2, &bottom2, &msb2);
 	top2 = rb_time_val_cnt(top2, cnt2);
 	bottom2 = rb_time_val_cnt(bottom2, cnt2);
 
 	if (!rb_time_read_cmpxchg(&t->cnt, cnt, cnt2))
 		return false;
+	if (!rb_time_read_cmpxchg(&t->msb, msb, msb2))
+		return false;
 	if (!rb_time_read_cmpxchg(&t->top, top, top2))
 		return false;
 	if (!rb_time_read_cmpxchg(&t->bottom, bottom, bottom2))
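As a sanity check of the top/bottom/msb arithmetic above, here is a minimal
userspace sketch (assumption: plain integers stand in for local_t, and the
cnt tagging is ignored) that round-trips a 64-bit value through the same
split used by rb_time_split() and the read side:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RB_TIME_SHIFT     30
    #define RB_TIME_VAL_MASK  ((1UL << RB_TIME_SHIFT) - 1)
    #define RB_TIME_MSB_SHIFT 60

    int main(void)
    {
            /* A value with bits above 60 set, so the msb word matters. */
            uint64_t val = (3ULL << RB_TIME_MSB_SHIFT) | 0x123456789abcdefULL;
            unsigned long top, bottom, msb;
            uint64_t ret;

            /* Split: bits 0-29, bits 30-59, bits 60-63. */
            top    = (unsigned long)((val >> RB_TIME_SHIFT) & RB_TIME_VAL_MASK);
            bottom = (unsigned long)(val & RB_TIME_VAL_MASK);
            msb    = (unsigned long)(val >> RB_TIME_MSB_SHIFT);

            /* Recombine the three words as the read side does. */
            ret = ((uint64_t)top << RB_TIME_SHIFT) | bottom |
                  ((uint64_t)msb << RB_TIME_MSB_SHIFT);

            assert(ret == val);
            printf("round-trip ok: %llx\n", (unsigned long long)ret);
            return 0;
    }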
@@ -783,6 +799,24 @@ static inline void verify_event(struct ring_buffer_per_cpu *cpu_buffer,
 }
 #endif
 
+/*
+ * The absolute time stamp drops the 5 MSBs and some clocks may
+ * require them. The rb_fix_abs_ts() will take a previous full
+ * time stamp, and add the 5 MSB of that time stamp on to the
+ * saved absolute time stamp. Then they are compared in case of
+ * the unlikely event that the latest time stamp incremented
+ * the 5 MSB.
+ */
+static inline u64 rb_fix_abs_ts(u64 abs, u64 save_ts)
+{
+	if (save_ts & TS_MSB) {
+		abs |= save_ts & TS_MSB;
+		/* Check for overflow */
+		if (unlikely(abs < save_ts))
+			abs += 1ULL << 59;
+	}
+	return abs;
+}
+
 static inline u64 rb_time_stamp(struct trace_buffer *buffer);
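A worked example of the fix-up, again as a userspace sketch (the constants
mirror the ones added at the top of ring_buffer.c; the sample values are
made up):

    #include <stdint.h>
    #include <stdio.h>

    #define TS_MSB      (0xf8ULL << 56)
    #define ABS_TS_MASK (~TS_MSB)

    /* Userspace copy of rb_fix_abs_ts() for illustration. */
    static uint64_t fix_abs_ts(uint64_t abs, uint64_t save_ts)
    {
            if (save_ts & TS_MSB) {
                    abs |= save_ts & TS_MSB;
                    /* The 59-bit value wrapped between save_ts and abs. */
                    if (abs < save_ts)
                            abs += 1ULL << 59;
            }
            return abs;
    }

    int main(void)
    {
            uint64_t full = (2ULL << 59) | 123456; /* clock value using the MSBs */
            uint64_t stored = full & ABS_TS_MASK;  /* what a 59-bit event kept */

            printf("restored %llu, expected %llu\n",
                   (unsigned long long)fix_abs_ts(stored, full),
                   (unsigned long long)full);
            return 0;
    }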
@@ -811,8 +845,10 @@ u64 ring_buffer_event_time_stamp(struct trace_buffer *buffer,
 	u64 ts;
 
 	/* If the event includes an absolute time, then just use that */
-	if (event->type_len == RINGBUF_TYPE_TIME_STAMP)
-		return rb_event_time_stamp(event);
+	if (event->type_len == RINGBUF_TYPE_TIME_STAMP) {
+		ts = rb_event_time_stamp(event);
+		return rb_fix_abs_ts(ts, cpu_buffer->tail_page->page->time_stamp);
+	}
 
 	nest = local_read(&cpu_buffer->committing);
 	verify_event(cpu_buffer, event);

@@ -2754,8 +2790,15 @@ static void rb_add_timestamp(struct ring_buffer_per_cpu *cpu_buffer,
 		 (RB_ADD_STAMP_FORCE | RB_ADD_STAMP_ABSOLUTE);
 
 	if (unlikely(info->delta > (1ULL << 59))) {
+		/*
+		 * Some timers can use more than 59 bits, and when a timestamp
+		 * is added to the buffer, it will lose those bits.
+		 */
+		if (abs && (info->ts & TS_MSB)) {
+			info->delta &= ABS_TS_MASK;
+
 		/* did the clock go backwards */
-		if (info->before == info->after && info->before > info->ts) {
+		} else if (info->before == info->after && info->before > info->ts) {
 			/* not interrupted */
 			static int once;

@@ -3304,7 +3347,7 @@ static void dump_buffer_page(struct buffer_data_page *bpage,
 
 		case RINGBUF_TYPE_TIME_STAMP:
 			delta = rb_event_time_stamp(event);
-			ts = delta;
+			ts = rb_fix_abs_ts(delta, ts);
 			pr_warn("  [%lld] absolute:%lld TIME STAMP\n", ts, delta);
 			break;

@@ -3380,7 +3423,7 @@ static void check_buffer(struct ring_buffer_per_cpu *cpu_buffer,
 
 		case RINGBUF_TYPE_TIME_STAMP:
 			delta = rb_event_time_stamp(event);
-			ts = delta;
+			ts = rb_fix_abs_ts(delta, ts);
 			break;
 
 		case RINGBUF_TYPE_PADDING:

@@ -4367,6 +4410,7 @@ rb_update_read_stamp(struct ring_buffer_per_cpu *cpu_buffer,
 
 	case RINGBUF_TYPE_TIME_STAMP:
 		delta = rb_event_time_stamp(event);
+		delta = rb_fix_abs_ts(delta, cpu_buffer->read_stamp);
 		cpu_buffer->read_stamp = delta;
 		return;

@@ -4397,6 +4441,7 @@ rb_update_iter_read_stamp(struct ring_buffer_iter *iter,
 
 	case RINGBUF_TYPE_TIME_STAMP:
 		delta = rb_event_time_stamp(event);
+		delta = rb_fix_abs_ts(delta, iter->read_stamp);
 		iter->read_stamp = delta;
 		return;

@@ -4650,6 +4695,7 @@ rb_buffer_peek(struct ring_buffer_per_cpu *cpu_buffer, u64 *ts,
 	case RINGBUF_TYPE_TIME_STAMP:
 		if (ts) {
 			*ts = rb_event_time_stamp(event);
+			*ts = rb_fix_abs_ts(*ts, reader->page->time_stamp);
 			ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
 							 cpu_buffer->cpu, ts);
 		}

@@ -4741,6 +4787,7 @@ rb_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
 	case RINGBUF_TYPE_TIME_STAMP:
 		if (ts) {
 			*ts = rb_event_time_stamp(event);
+			*ts = rb_fix_abs_ts(*ts, iter->head_page->page->time_stamp);
 			ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
 							 cpu_buffer->cpu, ts);
 		}

@@ -6011,10 +6058,10 @@ static __init int test_ringbuffer(void)
 	pr_info("        total events:   %ld\n", total_lost + total_read);
 	pr_info("  recorded len bytes:   %ld\n", total_len);
 	pr_info(" recorded size bytes:   %ld\n", total_size);
-	if (total_lost)
+	if (total_lost) {
 		pr_info(" With dropped events, record len and size may not match\n"
 			" alloced and written from above\n");
-	if (!total_lost) {
+	} else {
 		if (RB_WARN_ON(buffer, total_len != total_alloc ||
 			       total_size != total_written))
 			break;
kernel/trace/trace.c

@@ -721,13 +721,16 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
 		pos = 0;
 
 		ret = trace_get_user(&parser, ubuf, cnt, &pos);
-		if (ret < 0 || !trace_parser_loaded(&parser))
+		if (ret < 0)
 			break;
 
 		read += ret;
 		ubuf += ret;
 		cnt -= ret;
 
+		if (!trace_parser_loaded(&parser))
+			break;
+
 		ret = -EINVAL;
 		if (kstrtoul(parser.buffer, 0, &val))
 			break;

@@ -753,7 +756,6 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
 	if (!nr_pids) {
 		/* Cleared the list of pids */
 		trace_pid_list_free(pid_list);
-		read = ret;
 		pid_list = NULL;
 	}

@@ -1174,7 +1176,7 @@ void tracing_snapshot_cond(struct trace_array *tr, void *cond_data)
 EXPORT_SYMBOL_GPL(tracing_snapshot_cond);
 
 /**
- * tracing_snapshot_cond_data - get the user data associated with a snapshot
+ * tracing_cond_snapshot_data - get the user data associated with a snapshot
 * @tr:		The tracing instance
 *
 * When the user enables a conditional snapshot using

@@ -1542,6 +1544,7 @@ static struct {
 	{ ktime_get_mono_fast_ns,	"mono",		1 },
 	{ ktime_get_raw_fast_ns,	"mono_raw",	1 },
 	{ ktime_get_boot_fast_ns,	"boot",		1 },
+	{ ktime_get_tai_fast_ns,	"tai",		1 },
 	ARCH_TRACE_CLOCKS
 };

@@ -2835,7 +2838,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
 }
 EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
 
-static DEFINE_SPINLOCK(tracepoint_iter_lock);
+static DEFINE_RAW_SPINLOCK(tracepoint_iter_lock);
 static DEFINE_MUTEX(tracepoint_printk_mutex);
 
 static void output_printk(struct trace_event_buffer *fbuffer)

@@ -2863,14 +2866,14 @@ static void output_printk(struct trace_event_buffer *fbuffer)
 
 	event = &fbuffer->trace_file->event_call->event;
 
-	spin_lock_irqsave(&tracepoint_iter_lock, flags);
+	raw_spin_lock_irqsave(&tracepoint_iter_lock, flags);
 	trace_seq_init(&iter->seq);
 	iter->ent = fbuffer->entry;
 	event_call->event.funcs->trace(iter, 0, event);
 	trace_seq_putc(&iter->seq, 0);
 	printk("%s", iter->seq.buffer);
 
-	spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
+	raw_spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
 }
 
 int tracepoint_printk_sysctl(struct ctl_table *table, int write,
@@ -4249,7 +4252,7 @@ static void print_func_help_header_irq(struct array_buffer *buf, struct seq_file
 					 unsigned int flags)
 {
 	bool tgid = flags & TRACE_ITER_RECORD_TGID;
-	const char *space = "            ";
+	static const char space[] = "            ";
 	int prec = tgid ? 12 : 2;
 
 	print_event_info(buf, m);

@@ -4273,9 +4276,7 @@ print_trace_header(struct seq_file *m, struct trace_iterator *iter)
 	struct tracer *type = iter->trace;
 	unsigned long entries;
 	unsigned long total;
-	const char *name = "preemption";
-
-	name = type->name;
+	const char *name = type->name;
 
 	get_total_entries(buf, &total, &entries);

@@ -5469,7 +5470,7 @@ static const char readme_msg[] =
 	"  error_log\t- error log for failed commands (that support it)\n"
 	"  buffer_size_kb\t- view and modify size of per cpu buffer\n"
 	"  buffer_total_size_kb  - view total size of all cpu buffers\n\n"
-	"  trace_clock\t\t-change the clock used to order events\n"
+	"  trace_clock\t\t- change the clock used to order events\n"
 	"       local:   Per cpu clock but may not be synced across CPUs\n"
 	"      global:   Synced across CPUs but slows tracing down.\n"
 	"     counter:   Not a clock, but just an increment\n"

@@ -5478,7 +5479,7 @@ static const char readme_msg[] =
 #ifdef CONFIG_X86_64
 	"     x86-tsc:   TSC cycle counter\n"
 #endif
-	"\n  timestamp_mode\t-view the mode used to timestamp events\n"
+	"\n  timestamp_mode\t- view the mode used to timestamp events\n"
 	"       delta:   Delta difference against a buffer-wide timestamp\n"
 	"    absolute:   Absolute (standalone) timestamp\n"
 	"\n  trace_marker\t\t- Writes into this file writes into the kernel buffer\n"

@@ -6326,12 +6327,18 @@ static void tracing_set_nop(struct trace_array *tr)
 	tr->current_trace = &nop_trace;
 }
 
+static bool tracer_options_updated;
+
 static void add_tracer_options(struct trace_array *tr, struct tracer *t)
 {
 	/* Only enable if the directory has been created already. */
 	if (!tr->dir)
 		return;
 
+	/* Only create trace option files after update_tracer_options finish */
+	if (!tracer_options_updated)
+		return;
+
 	create_trace_option_files(tr, t);
 }
@@ -6448,7 +6455,7 @@ tracing_set_trace_write(struct file *filp, const char __user *ubuf,
 {
 	struct trace_array *tr = filp->private_data;
 	char buf[MAX_TRACER_SIZE+1];
-	int i;
+	char *name;
 	size_t ret;
 	int err;

@@ -6462,11 +6469,9 @@ tracing_set_trace_write(struct file *filp, const char __user *ubuf,
 
 	buf[cnt] = 0;
 
-	/* strip ending whitespace. */
-	for (i = cnt - 1; i > 0 && isspace(buf[i]); i--)
-		buf[i] = 0;
+	name = strim(buf);
 
-	err = tracing_set_tracer(tr, buf);
+	err = tracing_set_tracer(tr, name);
 	if (err)
 		return err;

@@ -9170,6 +9175,7 @@ static void __update_tracer_options(struct trace_array *tr)
 static void update_tracer_options(struct trace_array *tr)
 {
 	mutex_lock(&trace_types_lock);
+	tracer_options_updated = true;
 	__update_tracer_options(tr);
 	mutex_unlock(&trace_types_lock);
 }

@@ -9602,6 +9608,7 @@ extern struct trace_eval_map *__stop_ftrace_eval_maps[];
 
 static struct workqueue_struct *eval_map_wq __initdata;
 static struct work_struct eval_map_work __initdata;
+static struct work_struct tracerfs_init_work __initdata;
 
 static void __init eval_map_work_func(struct work_struct *work)
 {

@@ -9627,6 +9634,8 @@ static int __init trace_eval_init(void)
 	return 0;
 }
 
+subsys_initcall(trace_eval_init);
+
 static int __init trace_eval_sync(void)
 {
 	/* Make sure the eval map updates are finished */

@@ -9709,15 +9718,8 @@ static struct notifier_block trace_module_nb = {
 };
 #endif /* CONFIG_MODULES */
 
-static __init int tracer_init_tracefs(void)
+static __init void tracer_init_tracefs_work_func(struct work_struct *work)
 {
-	int ret;
-
 	trace_access_lock_init();
 
-	ret = tracing_init_dentry();
-	if (ret)
-		return 0;
-
 	event_trace_init();

@@ -9739,8 +9741,6 @@ static __init int tracer_init_tracefs(void)
 	trace_create_file("saved_tgids", TRACE_MODE_READ, NULL,
 			NULL, &tracing_saved_tgids_fops);
 
-	trace_eval_init();
-
 	trace_create_eval_file(NULL);
 
 #ifdef CONFIG_MODULES

@@ -9755,6 +9755,24 @@ static __init int tracer_init_tracefs(void)
 	create_trace_instances(NULL);
 
 	update_tracer_options(&global_trace);
 }
 
+static __init int tracer_init_tracefs(void)
+{
+	int ret;
+
+	trace_access_lock_init();
+
+	ret = tracing_init_dentry();
+	if (ret)
+		return 0;
+
+	if (eval_map_wq) {
+		INIT_WORK(&tracerfs_init_work, tracer_init_tracefs_work_func);
+		queue_work(eval_map_wq, &tracerfs_init_work);
+	} else {
+		tracer_init_tracefs_work_func(NULL);
+	}
+
+	return 0;
+}
kernel/trace/trace.h

@@ -1573,13 +1573,12 @@ struct enable_trigger_data {
 };
 
 extern int event_enable_trigger_print(struct seq_file *m,
				      struct event_trigger_ops *ops,
				      struct event_trigger_data *data);
-extern void event_enable_trigger_free(struct event_trigger_ops *ops,
-				      struct event_trigger_data *data);
+extern void event_enable_trigger_free(struct event_trigger_data *data);
 extern int event_enable_trigger_parse(struct event_command *cmd_ops,
				      struct trace_event_file *file,
-				      char *glob, char *cmd, char *param);
+				      char *glob, char *cmd,
+				      char *param_and_filter);
 extern int event_enable_register_trigger(char *glob,
					 struct event_trigger_data *data,
					 struct trace_event_file *file);

@@ -1587,8 +1586,7 @@ extern void event_enable_unregister_trigger(char *glob,
					    struct event_trigger_data *test,
					    struct trace_event_file *file);
 extern void trigger_data_free(struct event_trigger_data *data);
-extern int event_trigger_init(struct event_trigger_ops *ops,
-			      struct event_trigger_data *data);
+extern int event_trigger_init(struct event_trigger_data *data);
 extern int trace_event_trigger_enable_disable(struct trace_event_file *file,
					      int trigger_enable);
 extern void update_cond_flag(struct trace_event_file *file);

@@ -1629,10 +1627,11 @@ extern void event_trigger_reset_filter(struct event_command *cmd_ops,
 extern int event_trigger_register(struct event_command *cmd_ops,
				  struct trace_event_file *file,
				  char *glob,
-				  char *cmd,
-				  char *trigger,
-				  struct event_trigger_data *trigger_data,
-				  int *n_registered);
+				  struct event_trigger_data *trigger_data);
+extern void event_trigger_unregister(struct event_command *cmd_ops,
+				     struct trace_event_file *file,
+				     char *glob,
+				     struct event_trigger_data *trigger_data);
 
 /**
 * struct event_trigger_ops - callbacks for trace event triggers

@@ -1686,12 +1685,9 @@ struct event_trigger_ops {
			    struct trace_buffer *buffer,
			    void *rec,
			    struct ring_buffer_event *rbe);
-	int			(*init)(struct event_trigger_ops *ops,
-					struct event_trigger_data *data);
-	void			(*free)(struct event_trigger_ops *ops,
-					struct event_trigger_data *data);
+	int			(*init)(struct event_trigger_data *data);
+	void			(*free)(struct event_trigger_data *data);
 	int			(*print)(struct seq_file *m,
					 struct event_trigger_ops *ops,
					 struct event_trigger_data *data);
 };
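To make the reworked callback shapes concrete, here is a minimal sketch of a
trigger ops instance written against the new prototypes (the my_* names are
hypothetical; compare the eprobe stubs in the trace_eprobe.c hunks further
down, which have exactly this shape):

    static int my_trigger_init(struct event_trigger_data *data)
    {
            /* The ops pointer is no longer passed in; state lives in *data. */
            return 0;
    }

    static void my_trigger_free(struct event_trigger_data *data)
    {
    }

    static struct event_trigger_ops my_trigger_ops = {
            .init = my_trigger_init,
            .free = my_trigger_free,
            /* .trigger and .print as before; .print still receives the ops. */
    };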
kernel/trace/trace_boot.c

@@ -300,7 +300,7 @@ trace_boot_hist_add_handlers(struct xbc_node *hnode, char **bufp,
 {
 	struct xbc_node *node;
 	const char *p, *handler;
-	int ret;
+	int ret = 0;
 
 	handler = xbc_node_get_data(hnode);
@@ -255,19 +255,14 @@ static const struct file_operations dynamic_events_ops = {
 /* Make a tracefs interface for controlling dynamic events */
 static __init int init_dynamic_event(void)
 {
-	struct dentry *entry;
 	int ret;
 
 	ret = tracing_init_dentry();
 	if (ret)
 		return 0;
 
-	entry = tracefs_create_file("dynamic_events", TRACE_MODE_WRITE, NULL,
-				    NULL, &dynamic_events_ops);
-
-	/* Event list interface */
-	if (!entry)
-		pr_warn("Could not create tracefs 'dynamic_events' entry\n");
+	trace_create_file("dynamic_events", TRACE_MODE_WRITE, NULL,
+			  NULL, &dynamic_events_ops);
 
 	return 0;
 }

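This conversion (repeated for several files below) leans on the `trace_create_file()` wrapper, which already warns on failure, so call sites no longer need their own `if (!entry) pr_warn(...)`. Roughly, the existing helper behaves like this (a sketch of the long-standing wrapper in kernel/trace/trace.c, not new code from this series):

    struct dentry *trace_create_file(const char *name, umode_t mode,
    				 struct dentry *parent, void *data,
    				 const struct file_operations *fops)
    {
    	struct dentry *ret;

    	ret = tracefs_create_file(name, mode, parent, data, fops);
    	if (!ret)
    		pr_warn("Could not create tracefs '%s' entry\n", name);

    	return ret;
    }
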
@@ -511,20 +511,17 @@ __eprobe_trace_func(struct eprobe_data *edata, void *rec)
  * functions are just stubs to fulfill what is needed to use the trigger
  * infrastructure.
  */
-static int eprobe_trigger_init(struct event_trigger_ops *ops,
-			       struct event_trigger_data *data)
+static int eprobe_trigger_init(struct event_trigger_data *data)
 {
 	return 0;
 }
 
-static void eprobe_trigger_free(struct event_trigger_ops *ops,
-				struct event_trigger_data *data)
+static void eprobe_trigger_free(struct event_trigger_data *data)
 {
 
 }
 
 static int eprobe_trigger_print(struct seq_file *m,
-				struct event_trigger_ops *ops,
 				struct event_trigger_data *data)
 {
 	/* Do not print eprobe event triggers */

@@ -549,7 +546,8 @@ static struct event_trigger_ops eprobe_trigger_ops = {
 
 static int eprobe_trigger_cmd_parse(struct event_command *cmd_ops,
 				    struct trace_event_file *file,
-				    char *glob, char *cmd, char *param)
+				    char *glob, char *cmd,
+				    char *param_and_filter)
 {
 	return -1;
 }

@@ -650,7 +648,7 @@ static struct trace_event_functions eprobe_funcs = {
 static int disable_eprobe(struct trace_eprobe *ep,
 			  struct trace_array *tr)
 {
-	struct event_trigger_data *trigger;
+	struct event_trigger_data *trigger = NULL, *iter;
 	struct trace_event_file *file;
 	struct eprobe_data *edata;
 

@@ -658,14 +656,16 @@ static int disable_eprobe(struct trace_eprobe *ep,
 	if (!file)
 		return -ENOENT;
 
-	list_for_each_entry(trigger, &file->triggers, list) {
-		if (!(trigger->flags & EVENT_TRIGGER_FL_PROBE))
+	list_for_each_entry(iter, &file->triggers, list) {
+		if (!(iter->flags & EVENT_TRIGGER_FL_PROBE))
 			continue;
-		edata = trigger->private_data;
-		if (edata->ep == ep)
+		edata = iter->private_data;
+		if (edata->ep == ep) {
+			trigger = iter;
 			break;
+		}
 	}
-	if (list_entry_is_head(trigger, &file->triggers, list))
+	if (!trigger)
 		return -ENODEV;
 
 	list_del_rcu(&trigger->list);

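The `disable_eprobe()` rework above is one instance of a tree-wide pattern: never use the `list_for_each_entry()` cursor after the loop, because the cursor never becomes NULL. Publish a match into a separate variable and test that instead. A self-contained sketch of the idiom with toy types:

    struct item { struct list_head list; int key; };

    static struct item *find_item(struct list_head *head, int key)
    {
    	struct item *found = NULL, *iter;

    	list_for_each_entry(iter, head, list) {
    		if (iter->key == key) {
    			found = iter;	/* publish the match explicitly */
    			break;
    		}
    	}
    	/* NULL means "not found"; no list_entry_is_head() needed */
    	return found;
    }
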
@@ -392,12 +392,6 @@ static void test_event_printk(struct trace_event_call *call)
 			if (!(dereference_flags & (1ULL << arg)))
 				goto next_arg;
 
-			/* Check for __get_sockaddr */;
-			if (str_has_prefix(fmt + i, "__get_sockaddr(")) {
-				dereference_flags &= ~(1ULL << arg);
-				goto next_arg;
-			}
-
 			/* Find the REC-> in the argument */
 			c = strchr(fmt + i, ',');
 			r = strstr(fmt + i, "REC->");

@@ -413,7 +407,14 @@ static void test_event_printk(struct trace_event_call *call)
 				a = strchr(fmt + i, '&');
 				if ((a && (a < r)) || test_field(r, call))
 					dereference_flags &= ~(1ULL << arg);
+			} else if ((r = strstr(fmt + i, "__get_dynamic_array(")) &&
+				   (!c || r < c)) {
+				dereference_flags &= ~(1ULL << arg);
+			} else if ((r = strstr(fmt + i, "__get_sockaddr(")) &&
+				   (!c || r < c)) {
+				dereference_flags &= ~(1ULL << arg);
 			}
 
 next_arg:
 			i--;
 			arg++;

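With this change, the verifier clears the dereference flag for `%p`-style arguments built from `__get_dynamic_array()` or `__get_sockaddr()` wherever they appear in the argument, not only as a string prefix, since both point into the ring-buffer event itself and remain valid at print time. For illustration, a hypothetical print fmt (as it would appear in an event's format file, not taken from this series) that the checker now accepts:

    print fmt: "flags=%d addr=%pISpc", REC->flags, __get_sockaddr(addr)
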
@@ -1723,9 +1724,9 @@ static LIST_HEAD(event_subsystems);
 
 static int subsystem_open(struct inode *inode, struct file *filp)
 {
+	struct trace_subsystem_dir *dir = NULL, *iter_dir;
+	struct trace_array *tr = NULL, *iter_tr;
 	struct event_subsystem *system = NULL;
-	struct trace_subsystem_dir *dir = NULL; /* Initialize for gcc */
-	struct trace_array *tr;
 	int ret;
 
 	if (tracing_is_disabled())

@@ -1734,10 +1735,12 @@ static int subsystem_open(struct inode *inode, struct file *filp)
 	/* Make sure the system still exists */
 	mutex_lock(&event_mutex);
 	mutex_lock(&trace_types_lock);
-	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
-		list_for_each_entry(dir, &tr->systems, list) {
-			if (dir == inode->i_private) {
+	list_for_each_entry(iter_tr, &ftrace_trace_arrays, list) {
+		list_for_each_entry(iter_dir, &iter_tr->systems, list) {
+			if (iter_dir == inode->i_private) {
 				/* Don't open systems with no events */
+				tr = iter_tr;
+				dir = iter_dir;
 				if (dir->nr_events) {
 					__get_system_dir(dir);
 					system = dir->subsystem;

@@ -1753,9 +1756,6 @@ static int subsystem_open(struct inode *inode, struct file *filp)
 	if (!system)
 		return -ENODEV;
 
-	/* Some versions of gcc think dir can be uninitialized here */
-	WARN_ON(!dir);
-
 	/* Still need to increment the ref count of the system */
 	if (trace_array_get(tr) < 0) {
 		put_system(dir);

@@ -2280,8 +2280,8 @@ static struct dentry *
 event_subsystem_dir(struct trace_array *tr, const char *name,
 		    struct trace_event_file *file, struct dentry *parent)
 {
+	struct event_subsystem *system, *iter;
 	struct trace_subsystem_dir *dir;
-	struct event_subsystem *system;
 	struct dentry *entry;
 
 	/* First see if we did not already create this dir */

@@ -2295,13 +2295,13 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
 	}
 
 	/* Now see if the system itself exists. */
-	list_for_each_entry(system, &event_subsystems, list) {
-		if (strcmp(system->name, name) == 0)
+	system = NULL;
+	list_for_each_entry(iter, &event_subsystems, list) {
+		if (strcmp(iter->name, name) == 0) {
+			system = iter;
 			break;
+		}
 	}
-	/* Reset system variable when not found */
-	if (&system->list == &event_subsystems)
-		system = NULL;
 
 	dir = kmalloc(sizeof(*dir), GFP_KERNEL);
 	if (!dir)

@@ -3546,12 +3546,10 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
 	struct dentry *d_events;
 	struct dentry *entry;
 
-	entry = tracefs_create_file("set_event", TRACE_MODE_WRITE, parent,
-				    tr, &ftrace_set_event_fops);
-	if (!entry) {
-		pr_warn("Could not create tracefs 'set_event' entry\n");
+	entry = trace_create_file("set_event", TRACE_MODE_WRITE, parent,
+				  tr, &ftrace_set_event_fops);
+	if (!entry)
 		return -ENOMEM;
-	}
 
 	d_events = tracefs_create_dir("events", parent);
 	if (!d_events) {

@@ -3566,16 +3564,12 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
 
 	/* There are not as crucial, just warn if they are not created */
 
-	entry = tracefs_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
-				    tr, &ftrace_set_event_pid_fops);
-	if (!entry)
-		pr_warn("Could not create tracefs 'set_event_pid' entry\n");
+	trace_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
+			  tr, &ftrace_set_event_pid_fops);
 
-	entry = tracefs_create_file("set_event_notrace_pid",
-				    TRACE_MODE_WRITE, parent, tr,
-				    &ftrace_set_event_notrace_pid_fops);
-	if (!entry)
-		pr_warn("Could not create tracefs 'set_event_notrace_pid' entry\n");
+	trace_create_file("set_event_notrace_pid",
+			  TRACE_MODE_WRITE, parent, tr,
+			  &ftrace_set_event_notrace_pid_fops);
 
 	/* ring buffer internal formats */
 	trace_create_file("header_page", TRACE_MODE_READ, d_events,

@@ -3790,17 +3784,14 @@ static __init int event_trace_init_fields(void)
 __init int event_trace_init(void)
 {
 	struct trace_array *tr;
-	struct dentry *entry;
 	int ret;
 
 	tr = top_trace_array();
 	if (!tr)
 		return -ENODEV;
 
-	entry = tracefs_create_file("available_events", TRACE_MODE_READ,
-				    NULL, tr, &ftrace_avail_fops);
-	if (!entry)
-		pr_warn("Could not create tracefs 'available_events' entry\n");
+	trace_create_file("available_events", TRACE_MODE_READ,
+			  NULL, tr, &ftrace_avail_fops);
 
 	ret = early_event_add_tracer(NULL, tr);
 	if (ret)

@@ -1816,7 +1816,7 @@ static void create_filter_finish(struct filter_parse_error *pe)
  * create_filter - create a filter for a trace_event_call
  * @tr: the trace array associated with these events
  * @call: trace_event_call to create a filter for
- * @filter_str: filter string
+ * @filter_string: filter string
  * @set_str: remember @filter_str and enable detailed error in filter
  * @filterp: out param for created filter (always updated on return)
  *           Must be a pointer that references a NULL pointer.

@@ -2093,8 +2093,11 @@ static int init_var_ref(struct hist_field *ref_field,
 	return err;
  free:
 	kfree(ref_field->system);
+	ref_field->system = NULL;
 	kfree(ref_field->event_name);
+	ref_field->event_name = NULL;
 	kfree(ref_field->name);
+	ref_field->name = NULL;
 
 	goto out;
 }

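The fix pairs each `kfree()` with a NULL assignment because these fields can be freed again on a later teardown path; since `kfree(NULL)` is defined to do nothing, clearing the pointer makes the second pass harmless. The idiom in isolation (toy `struct obj`, for illustration only):

    struct obj { char *name; };

    static void release_name(struct obj *o)
    {
    	kfree(o->name);
    	o->name = NULL;	/* a later release_name(o) is now a no-op */
    }
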
@@ -2785,7 +2788,8 @@ static char *find_trigger_filter(struct hist_trigger_data *hist_data,
 static struct event_command trigger_hist_cmd;
 static int event_hist_trigger_parse(struct event_command *cmd_ops,
 				    struct trace_event_file *file,
-				    char *glob, char *cmd, char *param);
+				    char *glob, char *cmd,
+				    char *param_and_filter);
 
 static bool compatible_keys(struct hist_trigger_data *target_hist_data,
 			    struct hist_trigger_data *hist_data,

@@ -4161,7 +4165,7 @@ static int create_val_field(struct hist_trigger_data *hist_data,
 	return __create_val_field(hist_data, val_idx, file, NULL, field_str, 0);
 }
 
-static const char *no_comm = "(no comm)";
+static const char no_comm[] = "(no comm)";
 
 static u64 hist_field_execname(struct hist_field *hist_field,
 			       struct tracing_map_elt *elt,

@@ -5252,7 +5256,7 @@ static void hist_trigger_show(struct seq_file *m,
 		seq_puts(m, "\n\n");
 
 	seq_puts(m, "# event histogram\n#\n# trigger info: ");
-	data->ops->print(m, data->ops, data);
+	data->ops->print(m, data);
 	seq_puts(m, "#\n\n");
 
 	hist_data = data->private_data;

@@ -5484,7 +5488,7 @@ static void hist_trigger_debug_show(struct seq_file *m,
 		seq_puts(m, "\n\n");
 
 	seq_puts(m, "# event histogram\n#\n# trigger info: ");
-	data->ops->print(m, data->ops, data);
+	data->ops->print(m, data);
 	seq_puts(m, "#\n\n");
 
 	hist_data = data->private_data;

@@ -5621,7 +5625,6 @@ static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
 }
 
 static int event_hist_trigger_print(struct seq_file *m,
-				    struct event_trigger_ops *ops,
 				    struct event_trigger_data *data)
 {
 	struct hist_trigger_data *hist_data = data->private_data;

@@ -5729,8 +5732,7 @@ static int event_hist_trigger_print(struct seq_file *m,
 	return 0;
 }
 
-static int event_hist_trigger_init(struct event_trigger_ops *ops,
-				   struct event_trigger_data *data)
+static int event_hist_trigger_init(struct event_trigger_data *data)
 {
 	struct hist_trigger_data *hist_data = data->private_data;
 

@@ -5758,8 +5760,7 @@ static void unregister_field_var_hists(struct hist_trigger_data *hist_data)
 	}
 }
 
-static void event_hist_trigger_free(struct event_trigger_ops *ops,
-				    struct event_trigger_data *data)
+static void event_hist_trigger_free(struct event_trigger_data *data)
 {
 	struct hist_trigger_data *hist_data = data->private_data;
 

@@ -5788,25 +5789,23 @@ static struct event_trigger_ops event_hist_trigger_ops = {
 	.free			= event_hist_trigger_free,
 };
 
-static int event_hist_trigger_named_init(struct event_trigger_ops *ops,
-					 struct event_trigger_data *data)
+static int event_hist_trigger_named_init(struct event_trigger_data *data)
 {
 	data->ref++;
 
 	save_named_trigger(data->named_data->name, data);
 
-	event_hist_trigger_init(ops, data->named_data);
+	event_hist_trigger_init(data->named_data);
 
 	return 0;
 }
 
-static void event_hist_trigger_named_free(struct event_trigger_ops *ops,
-					  struct event_trigger_data *data)
+static void event_hist_trigger_named_free(struct event_trigger_data *data)
 {
 	if (WARN_ON_ONCE(data->ref <= 0))
 		return;
 
-	event_hist_trigger_free(ops, data->named_data);
+	event_hist_trigger_free(data->named_data);
 
 	data->ref--;
 	if (!data->ref) {

@@ -5933,6 +5932,48 @@ static bool hist_trigger_match(struct event_trigger_data *data,
 	return true;
 }
 
+static bool existing_hist_update_only(char *glob,
+				      struct event_trigger_data *data,
+				      struct trace_event_file *file)
+{
+	struct hist_trigger_data *hist_data = data->private_data;
+	struct event_trigger_data *test, *named_data = NULL;
+	bool updated = false;
+
+	if (!hist_data->attrs->pause && !hist_data->attrs->cont &&
+	    !hist_data->attrs->clear)
+		goto out;
+
+	if (hist_data->attrs->name) {
+		named_data = find_named_trigger(hist_data->attrs->name);
+		if (named_data) {
+			if (!hist_trigger_match(data, named_data, named_data,
+						true))
+				goto out;
+		}
+	}
+
+	if (hist_data->attrs->name && !named_data)
+		goto out;
+
+	list_for_each_entry(test, &file->triggers, list) {
+		if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+			if (!hist_trigger_match(data, test, named_data, false))
+				continue;
+			if (hist_data->attrs->pause)
+				test->paused = true;
+			else if (hist_data->attrs->cont)
+				test->paused = false;
+			else if (hist_data->attrs->clear)
+				hist_clear(test);
+			updated = true;
+			goto out;
+		}
+	}
+ out:
+	return updated;
+}
+
 static int hist_register_trigger(char *glob,
 				 struct event_trigger_data *data,
 				 struct trace_event_file *file)

@@ -5961,19 +6002,11 @@ static int hist_register_trigger(char *glob,
 
 	list_for_each_entry(test, &file->triggers, list) {
 		if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
-			if (!hist_trigger_match(data, test, named_data, false))
-				continue;
-			if (hist_data->attrs->pause)
-				test->paused = true;
-			else if (hist_data->attrs->cont)
-				test->paused = false;
-			else if (hist_data->attrs->clear)
-				hist_clear(test);
-			else {
+			if (hist_trigger_match(data, test, named_data, false)) {
 				hist_err(tr, HIST_ERR_TRIGGER_EEXIST, 0);
 				ret = -EEXIST;
 				goto out;
 			}
-			goto out;
 		}
 	}
 new:

@@ -5993,7 +6026,7 @@ static int hist_register_trigger(char *glob,
 	}
 
 	if (data->ops->init) {
-		ret = data->ops->init(data->ops, data);
+		ret = data->ops->init(data);
 		if (ret < 0)
 			goto out;
 	}

@@ -6012,8 +6045,6 @@ static int hist_register_trigger(char *glob,
 
 	if (named_data)
 		destroy_hist_data(hist_data);
-
-	ret++;
 out:
 	return ret;
 }

@@ -6089,20 +6120,19 @@ static void hist_unregister_trigger(char *glob,
 				    struct event_trigger_data *data,
 				    struct trace_event_file *file)
 {
+	struct event_trigger_data *test = NULL, *iter, *named_data = NULL;
 	struct hist_trigger_data *hist_data = data->private_data;
-	struct event_trigger_data *test, *named_data = NULL;
-	bool unregistered = false;
 
 	lockdep_assert_held(&event_mutex);
 
 	if (hist_data->attrs->name)
 		named_data = find_named_trigger(hist_data->attrs->name);
 
-	list_for_each_entry(test, &file->triggers, list) {
-		if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
-			if (!hist_trigger_match(data, test, named_data, false))
+	list_for_each_entry(iter, &file->triggers, list) {
+		if (iter->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+			if (!hist_trigger_match(data, iter, named_data, false))
 				continue;
-			unregistered = true;
+			test = iter;
 			list_del_rcu(&test->list);
 			trace_event_trigger_enable_disable(file, 0);
 			update_cond_flag(file);

@@ -6110,11 +6140,11 @@ static void hist_unregister_trigger(char *glob,
 		}
 	}
 
-	if (unregistered && test->ops->free)
-		test->ops->free(test->ops, test);
+	if (test && test->ops->free)
+		test->ops->free(test);
 
 	if (hist_data->enable_timestamps) {
-		if (!hist_data->remove || unregistered)
+		if (!hist_data->remove || test)
 			tracing_set_filter_buffering(file->tr, false);
 	}
 }

@@ -6164,57 +6194,57 @@ static void hist_unreg_all(struct trace_event_file *file)
 			if (hist_data->enable_timestamps)
 				tracing_set_filter_buffering(file->tr, false);
 			if (test->ops->free)
-				test->ops->free(test->ops, test);
+				test->ops->free(test);
 		}
 	}
 }
 
 static int event_hist_trigger_parse(struct event_command *cmd_ops,
 				    struct trace_event_file *file,
-				    char *glob, char *cmd, char *param)
+				    char *glob, char *cmd,
+				    char *param_and_filter)
 {
 	unsigned int hist_trigger_bits = TRACING_MAP_BITS_DEFAULT;
 	struct event_trigger_data *trigger_data;
 	struct hist_trigger_attrs *attrs;
-	struct event_trigger_ops *trigger_ops;
 	struct hist_trigger_data *hist_data;
+	char *param, *filter, *p, *start;
 	struct synth_event *se;
 	const char *se_name;
-	bool remove = false;
-	char *trigger, *p, *start;
+	bool remove;
 	int ret = 0;
 
 	lockdep_assert_held(&event_mutex);
 
-	WARN_ON(!glob);
-
-	if (strlen(glob)) {
-		hist_err_clear();
-		last_cmd_set(file, param);
-	}
-
-	if (!param)
+	if (WARN_ON(!glob))
 		return -EINVAL;
 
-	if (glob[0] == '!')
-		remove = true;
+	if (glob[0]) {
+		hist_err_clear();
+		last_cmd_set(file, param_and_filter);
+	}
+
+	remove = event_trigger_check_remove(glob);
+
+	if (event_trigger_empty_param(param_and_filter))
+		return -EINVAL;
 
 	/*
 	 * separate the trigger from the filter (k:v [if filter])
 	 * allowing for whitespace in the trigger
 	 */
-	p = trigger = param;
+	p = param = param_and_filter;
 	do {
 		p = strstr(p, "if");
 		if (!p)
 			break;
-		if (p == param)
+		if (p == param_and_filter)
 			return -EINVAL;
 		if (*(p - 1) != ' ' && *(p - 1) != '\t') {
 			p++;
 			continue;
 		}
-		if (p >= param + strlen(param) - (sizeof("if") - 1) - 1)
+		if (p >= param_and_filter + strlen(param_and_filter) - (sizeof("if") - 1) - 1)
 			return -EINVAL;
 		if (*(p + sizeof("if") - 1) != ' ' && *(p + sizeof("if") - 1) != '\t') {
 			p++;

@@ -6224,24 +6254,24 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
 	} while (1);
 
 	if (!p)
-		param = NULL;
+		filter = NULL;
 	else {
 		*(p - 1) = '\0';
-		param = strstrip(p);
-		trigger = strstrip(trigger);
+		filter = strstrip(p);
+		param = strstrip(param);
 	}
 
 	/*
 	 * To simplify arithmetic expression parsing, replace occurrences of
 	 * '.sym-offset' modifier with '.symXoffset'
 	 */
-	start = strstr(trigger, ".sym-offset");
+	start = strstr(param, ".sym-offset");
 	while (start) {
 		*(start + 4) = 'X';
 		start = strstr(start + 11, ".sym-offset");
 	}
 
-	attrs = parse_hist_trigger_attrs(file->tr, trigger);
+	attrs = parse_hist_trigger_attrs(file->tr, param);
 	if (IS_ERR(attrs))
 		return PTR_ERR(attrs);
 

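The `.sym-offset` rewrite above can be exercised standalone; a userspace sketch of the same loop, on a hypothetical trigger string:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
    	char param[] = "hist:keys=call_site.sym-offset:vals=bytes_req";
    	char *start = strstr(param, ".sym-offset");

    	while (start) {
    		*(start + 4) = 'X';	/* ".sym-offset" -> ".symXoffset" */
    		start = strstr(start + 11, ".sym-offset");
    	}
    	/* prints: hist:keys=call_site.symXoffset:vals=bytes_req */
    	printf("%s\n", param);
    	return 0;
    }

The '-' would otherwise be taken as a subtraction operator by the expression parser; 'X' is just a placeholder with no meaning of its own.
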
@@ -6254,29 +6284,15 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
 		return PTR_ERR(hist_data);
 	}
 
-	trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
-
-	trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
+	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, hist_data);
 	if (!trigger_data) {
 		ret = -ENOMEM;
 		goto out_free;
 	}
 
-	trigger_data->count = -1;
-	trigger_data->ops = trigger_ops;
-	trigger_data->cmd_ops = cmd_ops;
-
-	INIT_LIST_HEAD(&trigger_data->list);
-	RCU_INIT_POINTER(trigger_data->filter, NULL);
-
-	trigger_data->private_data = hist_data;
-
-	/* if param is non-empty, it's supposed to be a filter */
-	if (param && cmd_ops->set_filter) {
-		ret = cmd_ops->set_filter(param, trigger_data, file);
-		if (ret < 0)
-			goto out_free;
-	}
+	ret = event_trigger_set_filter(cmd_ops, file, filter, trigger_data);
+	if (ret < 0)
+		goto out_free;
 
 	if (remove) {
 		if (!have_hist_trigger_match(trigger_data, file))

@@ -6287,7 +6303,7 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
 			goto out_free;
 		}
 
-		cmd_ops->unreg(glob+1, trigger_data, file);
+		event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
 		se_name = trace_event_name(file->event_call);
 		se = find_synth_event(se_name);
 		if (se)

@@ -6296,17 +6312,11 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
 		goto out_free;
 	}
 
-	ret = cmd_ops->reg(glob, trigger_data, file);
-	/*
-	 * The above returns on success the # of triggers registered,
-	 * but if it didn't register any it returns zero. Consider no
-	 * triggers registered a failure too.
-	 */
-	if (!ret) {
-		if (!(attrs->pause || attrs->cont || attrs->clear))
-			ret = -ENOENT;
+	if (existing_hist_update_only(glob, trigger_data, file))
 		goto out_free;
-	} else if (ret < 0)
+
+	ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
+	if (ret < 0)
 		goto out_free;
 
 	if (get_named_trigger_data(trigger_data))

@@ -6331,18 +6341,15 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
 	se = find_synth_event(se_name);
 	if (se)
 		se->ref++;
-	/* Just return zero, not the number of registered triggers */
-	ret = 0;
 out:
 	if (ret == 0)
 		hist_err_clear();
 
 	return ret;
 out_unreg:
-	cmd_ops->unreg(glob+1, trigger_data, file);
+	event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
 out_free:
-	if (cmd_ops->set_filter)
-		cmd_ops->set_filter(NULL, trigger_data, NULL);
+	event_trigger_reset_filter(cmd_ops, trigger_data);
 
 	remove_hist_vars(hist_data);
 

@@ -6463,7 +6470,7 @@ static void hist_enable_unreg_all(struct trace_event_file *file)
 			update_cond_flag(file);
 			trace_event_trigger_enable_disable(file, 0);
 			if (test->ops->free)
-				test->ops->free(test->ops, test);
+				test->ops->free(test);
 		}
 	}
 }

@@ -188,7 +188,7 @@ static int trigger_show(struct seq_file *m, void *v)
 	}
 
 	data = list_entry(v, struct event_trigger_data, list);
-	data->ops->print(m, data->ops, data);
+	data->ops->print(m, data);
 
 	return 0;
 }

@@ -432,7 +432,6 @@ event_trigger_print(const char *name, struct seq_file *m,
 
 /**
  * event_trigger_init - Generic event_trigger_ops @init implementation
- * @ops: The trigger ops associated with the trigger
  * @data: Trigger-specific data
  *
  * Common implementation of event trigger initialization.

@@ -442,8 +441,7 @@ event_trigger_print(const char *name, struct seq_file *m,
  *
  * Return: 0 on success, errno otherwise
  */
-int event_trigger_init(struct event_trigger_ops *ops,
-		       struct event_trigger_data *data)
+int event_trigger_init(struct event_trigger_data *data)
 {
 	data->ref++;
 	return 0;

@@ -451,7 +449,6 @@ int event_trigger_init(struct event_trigger_ops *ops,
 
 /**
  * event_trigger_free - Generic event_trigger_ops @free implementation
- * @ops: The trigger ops associated with the trigger
  * @data: Trigger-specific data
  *
  * Common implementation of event trigger de-initialization.

@@ -460,8 +457,7 @@ int event_trigger_init(struct event_trigger_ops *ops,
  * implementations.
  */
 static void
-event_trigger_free(struct event_trigger_ops *ops,
-		   struct event_trigger_data *data)
+event_trigger_free(struct event_trigger_data *data)
 {
 	if (WARN_ON_ONCE(data->ref <= 0))
 		return;

@@ -515,7 +511,7 @@ clear_event_triggers(struct trace_array *tr)
 			trace_event_trigger_enable_disable(file, 0);
 			list_del_rcu(&data->list);
 			if (data->ops->free)
-				data->ops->free(data->ops, data);
+				data->ops->free(data);
 		}
 	}
 }

@@ -581,19 +577,18 @@ static int register_trigger(char *glob,
 	}
 
 	if (data->ops->init) {
-		ret = data->ops->init(data->ops, data);
+		ret = data->ops->init(data);
 		if (ret < 0)
 			goto out;
 	}
 
 	list_add_rcu(&data->list, &file->triggers);
-	ret++;
 
 	update_cond_flag(file);
-	if (trace_event_trigger_enable_disable(file, 1) < 0) {
+	ret = trace_event_trigger_enable_disable(file, 1);
+	if (ret < 0) {
 		list_del_rcu(&data->list);
 		update_cond_flag(file);
-		ret--;
 	}
 out:
 	return ret;

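Note the return-value convention change here: `register_trigger()` used to count registered triggers with `ret++`/`ret--`, and callers mapped zero to `-ENOENT`; it now returns 0 on success or a negative errno directly. That is what lets the callers later in this diff collapse their `if (!ret) ... else if (ret > 0)` ladders into a plain error check; condensed, the caller side goes from

    ret = cmd_ops->reg(glob, trigger_data, file);	/* old: #registered */
    if (!ret)
    	ret = -ENOENT;		/* zero meant nothing was registered */
    else if (ret > 0)
    	ret = 0;

to

    ret = cmd_ops->reg(glob, trigger_data, file);	/* new: 0 or -errno */
    if (ret < 0)
    	goto out_free;
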
@@ -614,14 +609,13 @@ static void unregister_trigger(char *glob,
 			       struct event_trigger_data *test,
 			       struct trace_event_file *file)
 {
-	struct event_trigger_data *data;
-	bool unregistered = false;
+	struct event_trigger_data *data = NULL, *iter;
 
 	lockdep_assert_held(&event_mutex);
 
-	list_for_each_entry(data, &file->triggers, list) {
-		if (data->cmd_ops->trigger_type == test->cmd_ops->trigger_type) {
-			unregistered = true;
+	list_for_each_entry(iter, &file->triggers, list) {
+		if (iter->cmd_ops->trigger_type == test->cmd_ops->trigger_type) {
+			data = iter;
 			list_del_rcu(&data->list);
 			trace_event_trigger_enable_disable(file, 0);
 			update_cond_flag(file);

@@ -629,8 +623,8 @@ static void unregister_trigger(char *glob,
 		}
 	}
 
-	if (unregistered && data->ops->free)
-		data->ops->free(data->ops, data);
+	if (data && data->ops->free)
+		data->ops->free(data);
 }
 
 /*

@@ -744,15 +738,15 @@ bool event_trigger_empty_param(const char *param)
 
 /**
  * event_trigger_separate_filter - separate an event trigger from a filter
- * @param: The param string containing trigger and possibly filter
- * @trigger: outparam, will be filled with a pointer to the trigger
+ * @param_and_filter: String containing trigger and possibly filter
+ * @param: outparam, will be filled with a pointer to the trigger
  * @filter: outparam, will be filled with a pointer to the filter
  * @param_required: Specifies whether or not the param string is required
  *
  * Given a param string of the form '[trigger] [if filter]', this
  * function separates the filter from the trigger and returns the
- * trigger in *trigger and the filter in *filter. Either the *trigger
- * or the *filter may be set to NULL by this function - if not set to
+ * trigger in @param and the filter in @filter. Either the @param
+ * or the @filter may be set to NULL by this function - if not set to
  * NULL, they will contain strings corresponding to the trigger and
  * filter.
  *

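A hypothetical use of the helper documented above, splitting a trigger command from its filter clause (the example string is illustrative, not from this series):

    char buf[] = "stacktrace:5 if prev_pid == 0";
    char *param, *filter;
    int ret;

    ret = event_trigger_separate_filter(buf, &param, &filter, false);
    /* on success: param  -> "stacktrace:5"
     *             filter -> "prev_pid == 0"
     * with no "if" clause, filter comes back NULL */
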
@@ -927,48 +921,37 @@ void event_trigger_reset_filter(struct event_command *cmd_ops,
  * @cmd_ops: The event_command operations for the trigger
  * @file: The event file for the trigger's event
  * @glob: The trigger command string, with optional remove(!) operator
- * @cmd: The cmd string
- * @param: The param string
  * @trigger_data: The trigger_data for the trigger
- * @n_registered: optional outparam, the number of triggers registered
  *
  * Register an event trigger. The @cmd_ops are used to call the
- * cmd_ops->reg() function which actually does the registration. The
- * cmd_ops->reg() function returns the number of triggers registered,
- * which is assigned to n_registered, if n_registered is non-NULL.
+ * cmd_ops->reg() function which actually does the registration.
  *
  * Return: 0 on success, errno otherwise
  */
 int event_trigger_register(struct event_command *cmd_ops,
 			   struct trace_event_file *file,
 			   char *glob,
-			   char *cmd,
-			   char *param,
-			   struct event_trigger_data *trigger_data,
-			   int *n_registered)
+			   struct event_trigger_data *trigger_data)
 {
-	int ret;
+	return cmd_ops->reg(glob, trigger_data, file);
+}
 
-	if (n_registered)
-		*n_registered = 0;
-
-	ret = cmd_ops->reg(glob, trigger_data, file);
-	/*
-	 * The above returns on success the # of functions enabled,
-	 * but if it didn't find any functions it returns zero.
-	 * Consider no functions a failure too.
-	 */
-	if (!ret) {
-		cmd_ops->unreg(glob, trigger_data, file);
-		ret = -ENOENT;
-	} else if (ret > 0) {
-		if (n_registered)
-			*n_registered = ret;
-		/* Just return zero, not the number of enabled functions */
-		ret = 0;
-	}
-
-	return ret;
+/**
+ * event_trigger_unregister - unregister an event trigger
+ * @cmd_ops: The event_command operations for the trigger
+ * @file: The event file for the trigger's event
+ * @glob: The trigger command string, with optional remove(!) operator
+ * @trigger_data: The trigger_data for the trigger
+ *
+ * Unregister an event trigger. The @cmd_ops are used to call the
+ * cmd_ops->unreg() function which actually does the unregistration.
+ */
+void event_trigger_unregister(struct event_command *cmd_ops,
+			      struct trace_event_file *file,
+			      char *glob,
+			      struct event_trigger_data *trigger_data)
+{
+	cmd_ops->unreg(glob, trigger_data, file);
 }
 
 /*

@@ -981,7 +964,7 @@ int event_trigger_register(struct event_command *cmd_ops,
  * @file: The trace_event_file associated with the event
  * @glob: The raw string used to register the trigger
  * @cmd: The cmd portion of the string used to register the trigger
- * @param: The params portion of the string used to register the trigger
+ * @param_and_filter: The param and filter portion of the string used to register the trigger
  *
  * Common implementation for event command parsing and trigger
  * instantiation.

|
@ -994,94 +977,53 @@ int event_trigger_register(struct event_command *cmd_ops,
|
|||
static int
|
||||
event_trigger_parse(struct event_command *cmd_ops,
|
||||
struct trace_event_file *file,
|
||||
char *glob, char *cmd, char *param)
|
||||
char *glob, char *cmd, char *param_and_filter)
|
||||
{
|
||||
struct event_trigger_data *trigger_data;
|
||||
struct event_trigger_ops *trigger_ops;
|
||||
char *trigger = NULL;
|
||||
char *number;
|
||||
char *param, *filter;
|
||||
bool remove;
|
||||
int ret;
|
||||
|
||||
/* separate the trigger from the filter (t:n [if filter]) */
|
||||
if (param && isdigit(param[0])) {
|
||||
trigger = strsep(¶m, " \t");
|
||||
if (param) {
|
||||
param = skip_spaces(param);
|
||||
if (!*param)
|
||||
param = NULL;
|
||||
}
|
||||
}
|
||||
remove = event_trigger_check_remove(glob);
|
||||
|
||||
trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
|
||||
ret = event_trigger_separate_filter(param_and_filter, ¶m, &filter, false);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = -ENOMEM;
|
||||
trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
|
||||
trigger_data = event_trigger_alloc(cmd_ops, cmd, param, file);
|
||||
if (!trigger_data)
|
||||
goto out;
|
||||
|
||||
trigger_data->count = -1;
|
||||
trigger_data->ops = trigger_ops;
|
||||
trigger_data->cmd_ops = cmd_ops;
|
||||
trigger_data->private_data = file;
|
||||
INIT_LIST_HEAD(&trigger_data->list);
|
||||
INIT_LIST_HEAD(&trigger_data->named_list);
|
||||
|
||||
if (glob[0] == '!') {
|
||||
cmd_ops->unreg(glob+1, trigger_data, file);
|
||||
if (remove) {
|
||||
event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
|
||||
kfree(trigger_data);
|
||||
ret = 0;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (trigger) {
|
||||
number = strsep(&trigger, ":");
|
||||
ret = event_trigger_parse_num(param, trigger_data);
|
||||
if (ret)
|
||||
goto out_free;
|
||||
|
||||
ret = -EINVAL;
|
||||
if (!strlen(number))
|
||||
goto out_free;
|
||||
|
||||
/*
|
||||
* We use the callback data field (which is a pointer)
|
||||
* as our counter.
|
||||
*/
|
||||
ret = kstrtoul(number, 0, &trigger_data->count);
|
||||
if (ret)
|
||||
goto out_free;
|
||||
}
|
||||
|
||||
if (!param) /* if param is non-empty, it's supposed to be a filter */
|
||||
goto out_reg;
|
||||
|
||||
if (!cmd_ops->set_filter)
|
||||
goto out_reg;
|
||||
|
||||
ret = cmd_ops->set_filter(param, trigger_data, file);
|
||||
ret = event_trigger_set_filter(cmd_ops, file, filter, trigger_data);
|
||||
if (ret < 0)
|
||||
goto out_free;
|
||||
|
||||
out_reg:
|
||||
/* Up the trigger_data count to make sure reg doesn't free it on failure */
|
||||
event_trigger_init(trigger_ops, trigger_data);
|
||||
ret = cmd_ops->reg(glob, trigger_data, file);
|
||||
/*
|
||||
* The above returns on success the # of functions enabled,
|
||||
* but if it didn't find any functions it returns zero.
|
||||
* Consider no functions a failure too.
|
||||
*/
|
||||
if (!ret) {
|
||||
cmd_ops->unreg(glob, trigger_data, file);
|
||||
ret = -ENOENT;
|
||||
} else if (ret > 0)
|
||||
ret = 0;
|
||||
event_trigger_init(trigger_data);
|
||||
|
||||
ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
|
||||
if (ret)
|
||||
goto out_free;
|
||||
|
||||
/* Down the counter of trigger_data or free it if not used anymore */
|
||||
event_trigger_free(trigger_ops, trigger_data);
|
||||
event_trigger_free(trigger_data);
|
||||
out:
|
||||
return ret;
|
||||
|
||||
out_free:
|
||||
if (cmd_ops->set_filter)
|
||||
cmd_ops->set_filter(NULL, trigger_data, NULL);
|
||||
event_trigger_reset_filter(cmd_ops, trigger_data);
|
||||
kfree(trigger_data);
|
||||
goto out;
|
||||
}
|
||||
|
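`event_trigger_alloc()` itself is not shown in this diff; judging from the open-coded initialization it replaces above, its shape is presumably along these lines (an inference from the removed lines, not the verbatim kernel implementation):

    static struct event_trigger_data *
    event_trigger_alloc(struct event_command *cmd_ops, char *cmd,
    		    char *param, void *private_data)
    {
    	struct event_trigger_data *trigger_data;

    	trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
    	if (!trigger_data)
    		return NULL;

    	trigger_data->count = -1;
    	trigger_data->ops = cmd_ops->get_trigger_ops(cmd, param);
    	trigger_data->cmd_ops = cmd_ops;
    	trigger_data->private_data = private_data;
    	INIT_LIST_HEAD(&trigger_data->list);
    	INIT_LIST_HEAD(&trigger_data->named_list);
    	RCU_INIT_POINTER(trigger_data->filter, NULL);

    	return trigger_data;
    }
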
@@ -1401,16 +1343,14 @@ traceoff_count_trigger(struct event_trigger_data *data,
 }
 
 static int
-traceon_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
-		      struct event_trigger_data *data)
+traceon_trigger_print(struct seq_file *m, struct event_trigger_data *data)
 {
 	return event_trigger_print("traceon", m, (void *)data->count,
 				   data->filter_str);
 }
 
 static int
-traceoff_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
-		       struct event_trigger_data *data)
+traceoff_trigger_print(struct seq_file *m, struct event_trigger_data *data)
 {
 	return event_trigger_print("traceoff", m, (void *)data->count,
 				   data->filter_str);

@@ -1521,8 +1461,7 @@ register_snapshot_trigger(char *glob,
 }
 
 static int
-snapshot_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
-		       struct event_trigger_data *data)
+snapshot_trigger_print(struct seq_file *m, struct event_trigger_data *data)
 {
 	return event_trigger_print("snapshot", m, (void *)data->count,
 				   data->filter_str);

@@ -1617,8 +1556,7 @@ stacktrace_count_trigger(struct event_trigger_data *data,
 }
 
 static int
-stacktrace_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
-			 struct event_trigger_data *data)
+stacktrace_trigger_print(struct seq_file *m, struct event_trigger_data *data)
 {
 	return event_trigger_print("stacktrace", m, (void *)data->count,
 				   data->filter_str);

@@ -1708,7 +1646,6 @@ event_enable_count_trigger(struct event_trigger_data *data,
 }
 
 int event_enable_trigger_print(struct seq_file *m,
-			       struct event_trigger_ops *ops,
 			       struct event_trigger_data *data)
 {
 	struct enable_trigger_data *enable_data = data->private_data;
 

@@ -1733,8 +1670,7 @@ int event_enable_trigger_print(struct seq_file *m,
 	return 0;
 }
 
-void event_enable_trigger_free(struct event_trigger_ops *ops,
-			       struct event_trigger_data *data)
+void event_enable_trigger_free(struct event_trigger_data *data)
 {
 	struct enable_trigger_data *enable_data = data->private_data;
 

@@ -1781,39 +1717,33 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
 
 int event_enable_trigger_parse(struct event_command *cmd_ops,
 			       struct trace_event_file *file,
-			       char *glob, char *cmd, char *param)
+			       char *glob, char *cmd, char *param_and_filter)
 {
 	struct trace_event_file *event_enable_file;
 	struct enable_trigger_data *enable_data;
 	struct event_trigger_data *trigger_data;
-	struct event_trigger_ops *trigger_ops;
 	struct trace_array *tr = file->tr;
+	char *param, *filter;
+	bool enable, remove;
 	const char *system;
 	const char *event;
 	bool hist = false;
-	char *trigger;
-	char *number;
-	bool enable;
 	int ret;
 
+	remove = event_trigger_check_remove(glob);
+
+	if (event_trigger_empty_param(param_and_filter))
+		return -EINVAL;
+
+	ret = event_trigger_separate_filter(param_and_filter, &param, &filter, true);
+	if (ret)
+		return ret;
+
+	system = strsep(&param, ":");
 	if (!param)
 		return -EINVAL;
 
-	/* separate the trigger from the filter (s:e:n [if filter]) */
-	trigger = strsep(&param, " \t");
-	if (!trigger)
-		return -EINVAL;
-	if (param) {
-		param = skip_spaces(param);
-		if (!*param)
-			param = NULL;
-	}
-
-	system = strsep(&trigger, ":");
-	if (!trigger)
-		return -EINVAL;
-
-	event = strsep(&trigger, ":");
+	event = strsep(&param, ":");
 
 	ret = -EINVAL;
 	event_enable_file = find_event_file(tr, system, event);

|
|||
#else
|
||||
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
|
||||
#endif
|
||||
trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
|
||||
|
||||
ret = -ENOMEM;
|
||||
trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
|
||||
if (!trigger_data)
|
||||
goto out;
|
||||
|
||||
enable_data = kzalloc(sizeof(*enable_data), GFP_KERNEL);
|
||||
if (!enable_data) {
|
||||
kfree(trigger_data);
|
||||
if (!enable_data)
|
||||
goto out;
|
||||
}
|
||||
|
||||
trigger_data->count = -1;
|
||||
trigger_data->ops = trigger_ops;
|
||||
trigger_data->cmd_ops = cmd_ops;
|
||||
INIT_LIST_HEAD(&trigger_data->list);
|
||||
RCU_INIT_POINTER(trigger_data->filter, NULL);
|
||||
|
||||
enable_data->hist = hist;
|
||||
enable_data->enable = enable;
|
||||
enable_data->file = event_enable_file;
|
||||
trigger_data->private_data = enable_data;
|
||||
|
||||
if (glob[0] == '!') {
|
||||
cmd_ops->unreg(glob+1, trigger_data, file);
|
||||
trigger_data = event_trigger_alloc(cmd_ops, cmd, param, enable_data);
|
||||
if (!trigger_data) {
|
||||
kfree(enable_data);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (remove) {
|
||||
event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
|
||||
kfree(trigger_data);
|
||||
kfree(enable_data);
|
||||
ret = 0;
|
||||
|
@@ -1862,35 +1784,16 @@ int event_enable_trigger_parse(struct event_command *cmd_ops,
 	}
 
 	/* Up the trigger_data count to make sure nothing frees it on failure */
-	event_trigger_init(trigger_ops, trigger_data);
+	event_trigger_init(trigger_data);
 
-	if (trigger) {
-		number = strsep(&trigger, ":");
+	ret = event_trigger_parse_num(param, trigger_data);
+	if (ret)
+		goto out_free;
 
-		ret = -EINVAL;
-		if (!strlen(number))
-			goto out_free;
-
-		/*
-		 * We use the callback data field (which is a pointer)
-		 * as our counter.
-		 */
-		ret = kstrtoul(number, 0, &trigger_data->count);
-		if (ret)
-			goto out_free;
-	}
-
-	if (!param) /* if param is non-empty, it's supposed to be a filter */
-		goto out_reg;
-
-	if (!cmd_ops->set_filter)
-		goto out_reg;
-
-	ret = cmd_ops->set_filter(param, trigger_data, file);
+	ret = event_trigger_set_filter(cmd_ops, file, filter, trigger_data);
 	if (ret < 0)
 		goto out_free;
 
-out_reg:
 	/* Don't let event modules unload while probe registered */
 	ret = trace_event_try_get_ref(event_enable_file->event_call);
 	if (!ret) {

@@ -1901,32 +1804,23 @@ int event_enable_trigger_parse(struct event_command *cmd_ops,
 	ret = trace_event_enable_disable(event_enable_file, 1, 1);
 	if (ret < 0)
 		goto out_put;
-	ret = cmd_ops->reg(glob, trigger_data, file);
-	/*
-	 * The above returns on success the # of functions enabled,
-	 * but if it didn't find any functions it returns zero.
-	 * Consider no functions a failure too.
-	 */
-	if (!ret) {
-		ret = -ENOENT;
+
+	ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
+	if (ret)
 		goto out_disable;
-	} else if (ret < 0)
-		goto out_disable;
-	/* Just return zero, not the number of enabled functions */
-	ret = 0;
-	event_trigger_free(trigger_ops, trigger_data);
 
+	event_trigger_free(trigger_data);
 out:
 	return ret;
 
 out_disable:
 	trace_event_enable_disable(event_enable_file, 0, 1);
 out_put:
 	trace_event_put_ref(event_enable_file->event_call);
 out_free:
-	if (cmd_ops->set_filter)
-		cmd_ops->set_filter(NULL, trigger_data, NULL);
-	event_trigger_free(trigger_ops, trigger_data);
+	event_trigger_reset_filter(cmd_ops, trigger_data);
+	event_trigger_free(trigger_data);
 	kfree(enable_data);
 
 	goto out;
 }

@@ -1953,19 +1847,18 @@ int event_enable_register_trigger(char *glob,
 	}
 
 	if (data->ops->init) {
-		ret = data->ops->init(data->ops, data);
+		ret = data->ops->init(data);
 		if (ret < 0)
 			goto out;
 	}
 
 	list_add_rcu(&data->list, &file->triggers);
-	ret++;
 
 	update_cond_flag(file);
-	if (trace_event_trigger_enable_disable(file, 1) < 0) {
+	ret = trace_event_trigger_enable_disable(file, 1);
+	if (ret < 0) {
 		list_del_rcu(&data->list);
 		update_cond_flag(file);
-		ret--;
 	}
 out:
 	return ret;

@@ -1976,19 +1869,18 @@ void event_enable_unregister_trigger(char *glob,
 				     struct trace_event_file *file)
 {
 	struct enable_trigger_data *test_enable_data = test->private_data;
+	struct event_trigger_data *data = NULL, *iter;
 	struct enable_trigger_data *enable_data;
-	struct event_trigger_data *data;
-	bool unregistered = false;
 
 	lockdep_assert_held(&event_mutex);
 
-	list_for_each_entry(data, &file->triggers, list) {
-		enable_data = data->private_data;
+	list_for_each_entry(iter, &file->triggers, list) {
+		enable_data = iter->private_data;
 		if (enable_data &&
-		    (data->cmd_ops->trigger_type ==
+		    (iter->cmd_ops->trigger_type ==
 		     test->cmd_ops->trigger_type) &&
 		    (enable_data->file == test_enable_data->file)) {
-			unregistered = true;
+			data = iter;
 			list_del_rcu(&data->list);
 			trace_event_trigger_enable_disable(file, 0);
 			update_cond_flag(file);

@@ -1996,8 +1888,8 @@ void event_enable_unregister_trigger(char *glob,
 		}
 	}
 
-	if (unregistered && data->ops->free)
-		data->ops->free(data->ops, data);
+	if (data && data->ops->free)
+		data->ops->free(data);
 }
 
 static struct event_trigger_ops *

@@ -1907,25 +1907,18 @@ core_initcall(init_kprobe_trace_early);
 static __init int init_kprobe_trace(void)
 {
 	int ret;
-	struct dentry *entry;
 
 	ret = tracing_init_dentry();
 	if (ret)
 		return 0;
 
-	entry = tracefs_create_file("kprobe_events", TRACE_MODE_WRITE,
-				    NULL, NULL, &kprobe_events_ops);
-
-	/* Event list interface */
-	if (!entry)
-		pr_warn("Could not create tracefs 'kprobe_events' entry\n");
+	trace_create_file("kprobe_events", TRACE_MODE_WRITE,
+			  NULL, NULL, &kprobe_events_ops);
 
 	/* Profile interface */
-	entry = tracefs_create_file("kprobe_profile", TRACE_MODE_READ,
-				    NULL, NULL, &kprobe_profile_ops);
-
-	if (!entry)
-		pr_warn("Could not create tracefs 'kprobe_profile' entry\n");
+	trace_create_file("kprobe_profile", TRACE_MODE_READ,
+			  NULL, NULL, &kprobe_profile_ops);
 
 	setup_boot_kprobe_events();
 

@@ -1578,11 +1578,27 @@ static enum hrtimer_restart timerlat_irq(struct hrtimer *timer)
 
 	trace_timerlat_sample(&s);
 
-	notify_new_max_latency(diff);
+	if (osnoise_data.stop_tracing) {
+		if (time_to_us(diff) >= osnoise_data.stop_tracing) {
+
+			/*
+			 * At this point, if stop_tracing is set and <= print_stack,
+			 * print_stack is set and would be printed in the thread handler.
+			 *
+			 * Thus, print the stack trace as it is helpful to define the
+			 * root cause of an IRQ latency.
+			 */
+			if (osnoise_data.stop_tracing <= osnoise_data.print_stack) {
+				timerlat_save_stack(0);
+				timerlat_dump_stack(time_to_us(diff));
+			}
 
-	if (osnoise_data.stop_tracing)
-		if (time_to_us(diff) >= osnoise_data.stop_tracing)
 			osnoise_stop_tracing();
+			notify_new_max_latency(diff);
+
+			return HRTIMER_NORESTART;
+		}
+	}
 
 	wake_up_process(tlat->kthread);
 

@@ -692,7 +692,7 @@ static LIST_HEAD(ftrace_event_list);
 
 static int trace_search_list(struct list_head **list)
 {
-	struct trace_event *e;
+	struct trace_event *e = NULL, *iter;
 	int next = __TRACE_LAST_TYPE;
 
 	if (list_empty(&ftrace_event_list)) {

@@ -704,9 +704,11 @@ static int trace_search_list(struct list_head **list)
 	 * We used up all possible max events,
 	 * lets see if somebody freed one.
 	 */
-	list_for_each_entry(e, &ftrace_event_list, list) {
-		if (e->type != next)
+	list_for_each_entry(iter, &ftrace_event_list, list) {
+		if (iter->type != next) {
+			e = iter;
 			break;
+		}
 		next++;
 	}
 

@@ -714,7 +716,10 @@ static int trace_search_list(struct list_head **list)
 	if (next > TRACE_EVENT_TYPE_MAX)
 		return 0;
 
-	*list = &e->list;
+	if (e)
+		*list = &e->list;
+	else
+		*list = &ftrace_event_list;
 	return next;
 }
 

@@ -778,9 +783,8 @@ int register_trace_event(struct trace_event *event)
 
 		list_add_tail(&event->list, list);
 
-	} else if (event->type > __TRACE_LAST_TYPE) {
-		printk(KERN_WARNING "Need to add type to trace.h\n");
-		WARN_ON(1);
+	} else if (WARN(event->type > __TRACE_LAST_TYPE,
+			"Need to add type to trace.h")) {
 		goto out;
 	} else {
 		/* Is this event already used */

@@ -1571,13 +1575,8 @@ __init static int init_events(void)
 
 	for (i = 0; events[i]; i++) {
 		event = events[i];
-
 		ret = register_trace_event(event);
-		if (!ret) {
-			printk(KERN_WARNING "event %d failed to register\n",
-			       event->type);
-			WARN_ON_ONCE(1);
-		}
+		WARN_ONCE(!ret, "event %d failed to register", event->type);
 	}
 
 	return 0;

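The `register_trace_event()` and `init_events()` rewrites above rely on `WARN()`/`WARN_ONCE()` evaluating to the condition they test, so the printk and the error branch fold into one statement. In isolation (hypothetical `val`/`MAX`, for illustration):

    if (WARN_ONCE(val > MAX, "val %d out of range", val))
    	return -EINVAL;	/* taken exactly when the warning fired */
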
@@ -224,12 +224,9 @@ static const struct file_operations recursed_functions_fops = {
 
 __init static int create_recursed_functions(void)
 {
-	struct dentry *dentry;
-
-	dentry = trace_create_file("recursed_functions", TRACE_MODE_WRITE,
-				   NULL, NULL, &recursed_functions_fops);
-	if (!dentry)
-		pr_warn("WARNING: Failed to create recursed_functions\n");
+	trace_create_file("recursed_functions", TRACE_MODE_WRITE,
+			  NULL, NULL, &recursed_functions_fops);
 	return 0;
 }
 

@@ -895,6 +895,9 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 		ret = -1;
 		goto out;
 	}
+
+	/* Enable tracing on all functions again */
+	ftrace_set_global_filter(NULL, 0, 1);
 #endif
 
 	/* Don't test dynamic tracing, the function tracer already did */

@@ -154,7 +154,7 @@ print_syscall_enter(struct trace_iterator *iter, int flags,
 			goto end;
 
 		/* parameter types */
-		if (tr->trace_flags & TRACE_ITER_VERBOSE)
+		if (tr && tr->trace_flags & TRACE_ITER_VERBOSE)
 			trace_seq_printf(s, "%s ", entry->types[i]);
 
 		/* parameter values */

@@ -296,9 +296,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 	struct trace_event_file *trace_file;
 	struct syscall_trace_enter *entry;
 	struct syscall_metadata *sys_data;
-	struct ring_buffer_event *event;
-	struct trace_buffer *buffer;
-	unsigned int trace_ctx;
+	struct trace_event_buffer fbuffer;
 	unsigned long args[6];
 	int syscall_nr;
 	int size;

@@ -321,20 +319,16 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 
 	size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
 
-	trace_ctx = tracing_gen_ctx();
-
-	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
-			sys_data->enter_event->event.type, size, trace_ctx);
-	if (!event)
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
+	if (!entry)
 		return;
 
-	entry = ring_buffer_event_data(event);
+	entry = ring_buffer_event_data(fbuffer.event);
 	entry->nr = syscall_nr;
 	syscall_get_arguments(current, regs, args);
 	memcpy(entry->args, args, sizeof(unsigned long) * sys_data->nb_args);
 
-	event_trigger_unlock_commit(trace_file, buffer, event, entry,
-				    trace_ctx);
+	trace_event_buffer_commit(&fbuffer);
 }
 
 static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)

@@ -343,9 +337,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	struct trace_event_file *trace_file;
 	struct syscall_trace_exit *entry;
 	struct syscall_metadata *sys_data;
-	struct ring_buffer_event *event;
-	struct trace_buffer *buffer;
-	unsigned int trace_ctx;
+	struct trace_event_buffer fbuffer;
 	int syscall_nr;
 
 	syscall_nr = trace_get_syscall_nr(current, regs);

@@ -364,20 +356,15 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	if (!sys_data)
 		return;
 
-	trace_ctx = tracing_gen_ctx();
-
-	event = trace_event_buffer_lock_reserve(&buffer, trace_file,
-			sys_data->exit_event->event.type, sizeof(*entry),
-			trace_ctx);
-	if (!event)
+	entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry));
+	if (!entry)
 		return;
 
-	entry = ring_buffer_event_data(event);
+	entry = ring_buffer_event_data(fbuffer.event);
 	entry->nr = syscall_nr;
 	entry->ret = syscall_get_return_value(current, regs);
 
-	event_trigger_unlock_commit(trace_file, buffer, event, entry,
-				    trace_ctx);
+	trace_event_buffer_commit(&fbuffer);
 }
 
 static int reg_event_syscall_enter(struct trace_event_file *file,

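Both syscall probes above now use the same reserve/commit pair as regular trace events: `trace_event_buffer_reserve()` generates the trace context internally and returns the entry directly, and `trace_event_buffer_commit()` is expected to handle filtering and trigger processing on commit. The resulting shape, condensed (error handling elided):

    struct trace_event_buffer fbuffer;
    struct syscall_trace_enter *entry;

    entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
    if (!entry)
    	return;
    /* ... fill *entry ... */
    trace_event_buffer_commit(&fbuffer);
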
@@ -1045,7 +1045,8 @@ static void sort_secondary(struct tracing_map *map,
 /**
  * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
  * @map: The tracing_map
- * @sort_key: The sort key to use for sorting
+ * @sort_keys: The sort key to use for sorting
+ * @n_sort_keys: hitcount, always have at least one
  * @sort_entries: outval: pointer to allocated and sorted array of entries
  *
  * tracing_map_sort_entries() sorts the current set of entries in the

@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 /crc32table.h
 /crc64table.h
+/default.bconf
 /gen_crc32table
 /gen_crc64table
 /oid_registry_data.c

lib/Makefile

@@ -281,7 +281,15 @@ $(foreach file, $(libfdt_files), \
 	$(eval CFLAGS_$(file) = -I $(srctree)/scripts/dtc/libfdt))
 lib-$(CONFIG_LIBFDT) += $(libfdt_files)
 
-lib-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+obj-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+obj-$(CONFIG_BOOT_CONFIG_EMBED) += bootconfig-data.o
+
+$(obj)/bootconfig-data.o: $(obj)/default.bconf
+
+targets += default.bconf
+filechk_defbconf = cat $(or $(real-prereqs), /dev/null)
+$(obj)/default.bconf: $(CONFIG_BOOT_CONFIG_EMBED_FILE) FORCE
+	$(call filechk,defbconf)
 
 obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o
 obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o

@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Embed default bootconfig in the kernel.
+ */
+	.section .init.rodata, "aw"
+	.global embedded_bootconfig_data
+embedded_bootconfig_data:
+	.incbin "lib/default.bconf"
+	.global embedded_bootconfig_data_end
+embedded_bootconfig_data_end:

@@ -12,6 +12,19 @@
 #include <linux/kernel.h>
 #include <linux/memblock.h>
 #include <linux/string.h>
+
+#ifdef CONFIG_BOOT_CONFIG_EMBED
+/* embedded_bootconfig_data is defined in bootconfig-data.S */
+extern __visible const char embedded_bootconfig_data[];
+extern __visible const char embedded_bootconfig_data_end[];
+
+const char * __init xbc_get_embedded_bootconfig(size_t *size)
+{
+	*size = embedded_bootconfig_data_end - embedded_bootconfig_data;
+	return (*size) ? embedded_bootconfig_data : NULL;
+}
+#endif
+
 #else /* !__KERNEL__ */
 /*
  * NOTE: This is only for tools/bootconfig, because tools/bootconfig will

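For completeness, a sketch of how early boot code can consume the embedded blob when no bootconfig was attached to the initrd; the actual wiring lives in the bootconfig setup path, which is not part of this hunk, and the `xbc_init()` call assumes the signature used elsewhere in this series:

    size_t size;
    const char *data = xbc_get_embedded_bootconfig(&size);

    if (data) {
    	/* hand the embedded blob to the bootconfig parser */
    	ret = xbc_init(data, size, &msg, &pos);
    	/* ... */
    }
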