| author | Linus Torvalds <torvalds@linux-foundation.org> | 2021-02-03 10:02:00 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-02-03 10:02:00 -0800 |
| commit | dbc15d24f9fa6f25723ef750b65b98bfcd3d3910 (patch) | |
| tree | 898b0bdeeaf84b5abff030caecbb74be0b4fef3e /kernel/trace/fgraph.c | |
| parent | 54fe3ffef0ebb60b1273d0d7b047ee9b4723cc61 (diff) | |
| parent | c8b186a8d54d7e12d28e9f9686cb00ff18fc2ab2 (diff) | |
Merge tag 'trace-v5.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
- Initialize tracing-graph-pause at task creation, not at the start of
  function tracing, to avoid corrupting the pause counter (the failure
  mode is modeled in the sketch after this list).
- Set "pause-on-trace" for the latency tracers, since leaving that
  option unset breaks their output (a regression).
- Fix the wrong error return for setting kretprobes on future modules
(before they are loaded).
- Fix re-registering the same kretprobe.
- Add a missing value check for an added RCU variable reload; the race
  it closes is sketched after the shortlog below.
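
To make the first fix concrete, here is a minimal, self-contained
user-space model of the failure mode. Every name in it is a
hypothetical stand-in (the real counter is
task_struct::tracing_graph_pause, and the reset being removed lived in
kernel/trace/fgraph.c, as the diff at the bottom of this page shows);
the point is only that re-initializing a pause counter when tracing
starts can clobber a pause the task has already taken:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for task_struct, reduced to the two fields
 * that matter for the bug. */
struct task {
	atomic_int tracing_graph_pause;
	void *ret_stack;
};

/* pause/unpause bracket regions that must not be graph-traced. */
static void pause_graph_tracing(struct task *t)
{
	atomic_fetch_add(&t->tracing_graph_pause, 1);
}

static void unpause_graph_tracing(struct task *t)
{
	atomic_fetch_sub(&t->tracing_graph_pause, 1);
}

/* Old behaviour: reset the counter for every task that has no
 * ret_stack yet when function graph tracing starts. */
static void graph_trace_start_old(struct task *t)
{
	if (t->ret_stack == NULL)
		atomic_store(&t->tracing_graph_pause, 0); /* clobbers a live pause */
}

int main(void)
{
	struct task idle = { .ret_stack = NULL };

	atomic_init(&idle.tracing_graph_pause, 0);

	pause_graph_tracing(&idle);    /* task pauses tracing around a fragile region */
	graph_trace_start_old(&idle);  /* tracing starts: counter is forced back to 0 */
	unpause_graph_tracing(&idle);  /* the matching unpause now underflows to -1   */

	/* Prints -1: the counter is corrupted. Initializing it once at
	 * task creation instead leaves this sequence balanced at 0. */
	printf("tracing_graph_pause = %d\n",
	       atomic_load(&idle.tracing_graph_pause));
	return 0;
}
```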
* tag 'trace-v5.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracepoint: Fix race between tracing and removing tracepoint
kretprobe: Avoid re-registration of the same kretprobe earlier
tracing/kprobe: Fix to support kretprobe events on unloaded modules
tracing: Use pause-on-trace with the latency tracers
fgraph: Initialize tracing_graph_pause at task creation
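
The tracepoint fix at the top of this shortlog comes down to a missing
NULL check after an RCU-protected pointer is reloaded: the tracepoint
can be removed between the moment a caller sees it as enabled and the
moment its callback data is fetched. Below is a minimal user-space
sketch of that pattern, with hypothetical names, and with C11 atomics
standing in for the kernel's rcu_dereference() of the tracepoint's
funcs pointer:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a tracepoint's callback entry. */
struct tp_func {
	void (*func)(void *data);
	void *data;
};

static atomic_bool tp_enabled;              /* static-key stand-in     */
static _Atomic(struct tp_func *) tp_funcs;  /* callback-array stand-in */

static void trace_hit(void)
{
	struct tp_func *it;

	if (!atomic_load_explicit(&tp_enabled, memory_order_acquire))
		return;

	/* A concurrent removal may disable the tracepoint and clear
	 * tp_funcs right here, after the enabled check above ... */

	it = atomic_load_explicit(&tp_funcs, memory_order_acquire);
	if (!it)  /* ... so this is the value check the fix adds. */
		return;
	it->func(it->data);
}

static void say(void *data)
{
	puts(data);
}

int main(void)
{
	static struct tp_func hello = { .func = say, .data = "hello" };

	atomic_store(&tp_funcs, &hello);
	atomic_store(&tp_enabled, true);
	trace_hit();                    /* fires the callback */

	atomic_store(&tp_funcs, NULL);  /* removal racing with a hit */
	trace_hit();                    /* without the check this would be a
	                                   NULL dereference; with it, a no-op */
	return 0;
}
```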
Diffstat (limited to 'kernel/trace/fgraph.c')
-rw-r--r-- | kernel/trace/fgraph.c | 2 |
1 file changed, 0 insertions, 2 deletions
```diff
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 73edb9e4f354..29a6ebeebc9e 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -394,7 +394,6 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 		}
 
 		if (t->ret_stack == NULL) {
-			atomic_set(&t->tracing_graph_pause, 0);
 			atomic_set(&t->trace_overrun, 0);
 			t->curr_ret_stack = -1;
 			t->curr_ret_depth = -1;
@@ -489,7 +488,6 @@ static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
 static void
 graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
 {
-	atomic_set(&t->tracing_graph_pause, 0);
 	atomic_set(&t->trace_overrun, 0);
 	t->ftrace_timestamp = 0;
 	/* make curr_ret_stack visible before we add the ret_stack */
```
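
Both hunks above only delete the atomic_set() of tracing_graph_pause,
and the diffstat confirms zero insertions in this file. The
initialization does not vanish: per the commit title ("fgraph:
Initialize tracing_graph_pause at task creation"), it moves to the
task-creation path, and the hunks that add it there fall outside the
'kernel/trace/fgraph.c' filter of this view.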