[Lucid] SRU: request pull EC2 Revert "sched: update load count only once per cpu in 10 tick update window"
John Johansen
john.johansen at canonical.com
Fri Aug 20 18:01:43 UTC 2010
The following changes since commit 65316c3b7e1fccaae5f2c0cf5f98cd3220a5be64:
Stefan Bader (1):
UBUNTU: Ubuntu-2.6.32-308.15
are available in the git repository at:
git://kernel.ubuntu.com/jj/ubuntu-lucid ec2
John Johansen (1):
UBUNTU: SAUCE: Revert "sched: update load count only once per cpu in 10 tick update window"
kernel/sched.c | 24 ++----------------------
1 files changed, 2 insertions(+), 22 deletions(-)
------
UBUNTU: SAUCE: Revert "sched: update load count only once per cpu in 10 tick update window"
BugLink: http://bugs.launchpad.net/bugs/574910
SRU Justification:
Impact:
Fixes phantom load average reporting on idle EC2 instances.
Fix:
This reverts commit 0d843425672f4d2dc99b9004409aae503ef4d39f, which fixed a bug in load
accounting when a tickless (no idle HZ) kernel is used. However, the Xen patchset used
on EC2 is not tickless, yet the accounting modifications are still applied, resulting
in phantom load.
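For context, the reported load average is an exponentially damped, fixed-point average
of calc_load_tasks, so any stale residue left in that counter shows up directly as load.
The following standalone sketch reuses the kernel's calc_load() arithmetic (FSHIFT,
FIXED_1 and EXP_1 as in include/linux/sched.h); the alternating one-phantom-task input
is only an assumed illustration of the symptom, not a trace of the actual Xen timing:

#include <stdio.h>

#define FSHIFT  11                      /* bits of fixed-point precision */
#define FIXED_1 (1UL << FSHIFT)         /* 1.0 in fixed point */
#define EXP_1   1884UL                  /* 1/exp(5sec/1min) in fixed point */

/* Same helper the kernel uses to fold one 5-second sample into avenrun[]. */
static unsigned long calc_load(unsigned long load, unsigned long exp,
			       unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long avenrun0 = 0;
	int i;

	/*
	 * Assume an idle machine where a stale deferred delta leaves one
	 * phantom task in calc_load_tasks every other 5-second window.
	 */
	for (i = 0; i < 120; i++) {     /* ten minutes of samples */
		unsigned long active = (i & 1) ? FIXED_1 : 0;
		avenrun0 = calc_load(avenrun0, EXP_1, active);
	}

	printf("1-min load: %lu.%02lu\n", avenrun0 >> FSHIFT,
	       ((avenrun0 & (FIXED_1 - 1)) * 100) >> FSHIFT);
	return 0;
}

With that input the 1-minute figure settles around 0.5, matching the phantom load
observed on idle EC2 instances.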
Testcase:
Start any Ubuntu Lucid-based instance on EC2 and let it idle while logging the load
average (a C equivalent of this loop is sketched below):
while true ; do cat /proc/loadavg >>load.log ; sleep 5 ; done
Alternatively, simply run top or htop and monitor the load average.
Without the revert, the reported load varies from 0 up to about 0.5 on a clean image
with no extra tasks launched.
With the revert, the load stays steady around 0, with only an occasional small bump
when a background task runs.
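For unattended logging, a minimal C equivalent of the shell loop above (a hypothetical
helper, not part of this SRU) could be:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[128];

	for (;;) {
		FILE *f = fopen("/proc/loadavg", "r");

		if (!f)
			return 1;       /* /proc not mounted or unreadable */
		if (fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);
		fclose(f);
		fflush(stdout);
		sleep(5);               /* same interval as the shell loop */
	}
}

Redirect stdout to load.log to reproduce the shell behaviour.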
Signed-off-by: John Johansen <john.johansen at canonical.com>
---
kernel/sched.c | 24 ++----------------------
1 files changed, 2 insertions(+), 22 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 31a25d0..cbb111e 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2964,7 +2964,6 @@ unsigned long this_cpu_load(void)
/* Variables and functions for calc_load */
static atomic_long_t calc_load_tasks;
-static atomic_long_t calc_load_tasks_deferred;
static unsigned long calc_load_update;
unsigned long avenrun[3];
EXPORT_SYMBOL(avenrun);
@@ -3019,7 +3018,7 @@ void calc_global_load(void)
*/
static void calc_load_account_active(struct rq *this_rq)
{
- long nr_active, delta, deferred;
+ long nr_active, delta;
nr_active = this_rq->nr_running;
nr_active += (long) this_rq->nr_uninterruptible;
@@ -3027,25 +3026,6 @@ static void calc_load_account_active(struct rq *this_rq)
if (nr_active != this_rq->calc_load_active) {
delta = nr_active - this_rq->calc_load_active;
this_rq->calc_load_active = nr_active;
-
- /*
- * Update calc_load_tasks only once per cpu in 10 tick update
- * window.
- */
- if (unlikely(time_before(jiffies, this_rq->calc_load_update) &&
- time_after_eq(jiffies, calc_load_update))) {
- if (delta)
- atomic_long_add(delta,
- &calc_load_tasks_deferred);
- return;
- }
-
- if (atomic_long_read(&calc_load_tasks_deferred)) {
- deferred = atomic_long_xchg(&calc_load_tasks_deferred,
- 0);
- delta += deferred;
- }
-
atomic_long_add(delta, &calc_load_tasks);
}
}
@@ -3089,8 +3069,8 @@ static void update_cpu_load(struct rq *this_rq)
}
if (time_after_eq(jiffies, this_rq->calc_load_update)) {
- calc_load_account_active(this_rq);
this_rq->calc_load_update += LOAD_FREQ;
+ calc_load_account_active(this_rq);
}
}
--
1.7.0.4