path: root/kernel/locking/rtmutex_api.c
author: Peter Zijlstra <peterz@infradead.org> 2024-10-04 14:46:58 +0200
committer: Peter Zijlstra <peterz@infradead.org> 2024-11-05 12:55:38 +0100
commit: 7c70cb94d29cd325fabe4a818c18613e3b9919a1 (patch)
tree: 2c4d6b58f7e284778060e2aff70db2b87d7a977c /kernel/locking/rtmutex_api.c
parent: 26baa1f1c4bdc34b8d698c1900b407d863ad0e69 (diff)
sched: Add Lazy preemption model
Change fair to use resched_curr_lazy(), which, when the lazy preemption model is selected, will set TIF_NEED_RESCHED_LAZY. This LAZY bit will be promoted to the full NEED_RESCHED bit on tick. As such, the average delay between setting LAZY and actually rescheduling will be TICK_NSEC/2.

In short, lazy preemption will delay preemption for the fair class but will function as full preemption for all the other classes, most notably the realtime (RR/FIFO/DEADLINE) classes.

The goal is to bridge the performance gap with Voluntary, such that we might eventually remove that option entirely.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/20241007075055.331243614@infradead.org
Diffstat (limited to 'kernel/locking/rtmutex_api.c')
0 files changed, 0 insertions, 0 deletions