[3.5.y.z extended stable] Patch "sched,rt: fix isolated CPUs leaving root_task_group indefinitely throttled" has been added to staging queue
Luis Henriques
luis.henriques at canonical.com
Fri Feb 14 12:55:07 UTC 2014
This is a note to let you know that I have just added a patch titled
sched,rt: fix isolated CPUs leaving root_task_group indefinitely throttled
to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue
If you, or anyone else, feels it should not be added to this tree, please
reply to this email.
For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Luis
------
From 2d2dc4140983fa8087daa3e98a722267b5f0556c Mon Sep 17 00:00:00 2001
From: Mike Galbraith <efault at gmx.de>
Date: Tue, 7 Aug 2012 10:02:38 +0200
Subject: sched,rt: fix isolated CPUs leaving root_task_group indefinitely throttled
commit e221d028bb08b47e624c5f0a31732c642db9d19a upstream.
Root task group bandwidth replenishment must service all CPUs, regardless of
where the timer was last started, and regardless of the isolation mechanism,
lest 'Quoth the Raven, "Nevermore"' become rt scheduling policy.
Signed-off-by: Mike Galbraith <efault at gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
Link: http://lkml.kernel.org/r/1344326558.6968.25.camel@marge.simpson.net
Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Cc: Li Zefan <lizefan at huawei.com>
Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
---
kernel/sched/rt.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8480912..06c3c6f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -788,6 +788,19 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 	const struct cpumask *span;
 
 	span = sched_rt_period_mask();
+#ifdef CONFIG_RT_GROUP_SCHED
+	/*
+	 * FIXME: isolated CPUs should really leave the root task group,
+	 * whether they are isolcpus or were isolated via cpusets, lest
+	 * the timer run on a CPU which does not service all runqueues,
+	 * potentially leaving other CPUs indefinitely throttled. If
+	 * isolation is really required, the user will turn the throttle
+	 * off to kill the perturbations it causes anyway. Meanwhile,
+	 * this maintains functionality for boot and/or troubleshooting.
+	 */
+	if (rt_b == &root_task_group.rt_bandwidth)
+		span = cpu_online_mask;
+#endif
 	for_each_cpu(i, span) {
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
--
1.8.3.2
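
For readers outside the scheduler code, the sketch below is a minimal userspace model (not kernel code) of the failure mode the patch addresses: when the replenishment timer walks only the CPUs in its own root domain, runqueues on isolated CPUs never get their RT budget refilled and stay throttled, whereas walking the full online mask, as the fix does for the root task group, unthrottles everyone. All names in the sketch (model_rt_rq, replenish(), NR_CPUS, the bitmask spans) are hypothetical and exist only for illustration.

/*
 * Hypothetical userspace model of the span walk in
 * do_sched_rt_period_timer(). It is NOT the kernel implementation; it only
 * illustrates why a partial span leaves isolated CPUs throttled while
 * cpu_online_mask does not.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct model_rt_rq {
	bool throttled;		/* RT budget exhausted on this CPU */
	int rt_time;		/* consumed budget, model units */
};

static struct model_rt_rq rq[NR_CPUS];

/* Refill the RT budget of every CPU whose bit is set in @span. */
static void replenish(unsigned int span)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(span & (1u << cpu)))
			continue;	/* CPU not serviced by this timer */
		rq[cpu].rt_time = 0;
		rq[cpu].throttled = false;
	}
}

int main(void)
{
	/* All CPUs have exhausted their RT budget and are throttled. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		rq[cpu] = (struct model_rt_rq){ .throttled = true, .rt_time = 950 };

	/*
	 * Before the fix: the timer walks only the root domain of the CPU it
	 * fired on (CPUs 0-1 here); isolated CPUs 2-3 stay throttled forever.
	 */
	replenish(0x3);
	printf("partial span:    cpu3 throttled = %d\n", rq[3].throttled);

	/* After the fix: the root task group services every online CPU. */
	replenish(0xf);
	printf("cpu_online_mask: cpu3 throttled = %d\n", rq[3].throttled);
	return 0;
}

As the FIXME in the patch notes, users who need strict isolation usually disable RT throttling altogether; writing -1 to /proc/sys/kernel/sched_rt_runtime_us (sysctl kernel.sched_rt_runtime_us) has that effect.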