[3.13.y.z extended stable] Patch "aio: protect reqs_available updates from changes in interrupt handlers" has been added to staging queue
Kamal Mostafa
kamal at canonical.com
Wed Aug 6 20:54:52 UTC 2014
This is a note to let you know that I have just added a patch titled
aio: protect reqs_available updates from changes in interrupt handlers
to the linux-3.13.y-queue branch of the 3.13.y.z extended stable tree
which can be found at:
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.13.y-queue
This patch is scheduled to be released in version 3.13.11.6.
If you, or anyone else, feel it should not be added to this tree, please
reply to this email.
For more information about the 3.13.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable
Thanks.
-Kamal
------
From c4835bc20ca905e399f9c2f90b492712ed2e0cd5 Mon Sep 17 00:00:00 2001
From: Benjamin LaHaise <bcrl at kvack.org>
Date: Mon, 14 Jul 2014 12:49:26 -0400
Subject: aio: protect reqs_available updates from changes in interrupt
handlers
commit 263782c1c95bbddbb022dc092fd89a36bb8d5577 upstream.
As of commit f8567a3845ac05bb28f3c1b478ef752762bd39ef it is now possible to
have put_reqs_available() called from irq context. While put_reqs_available()
is per cpu, it did not protect itself from interrupts on the same CPU. This
led to aio_complete() corrupting the available io requests count when run
under heavy O_DIRECT workloads as reported by Robert Elliott. Fix this by
disabling irqs around the per-cpu batch updates of reqs_available.
Many thanks to Robert and folks for testing and tracking this down.
Reported-by: Robert Elliot <Elliott at hp.com>
Tested-by: Robert Elliot <Elliott at hp.com>
Signed-off-by: Benjamin LaHaise <bcrl at kvack.org>
Cc: Jens Axboe <axboe at kernel.dk>, Christoph Hellwig <hch at infradead.org>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
fs/aio.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/fs/aio.c b/fs/aio.c
index 19e7d95..a0d9e43 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -816,16 +816,20 @@ void exit_aio(struct mm_struct *mm)
 static void put_reqs_available(struct kioctx *ctx, unsigned nr)
 {
 	struct kioctx_cpu *kcpu;
+	unsigned long flags;
 
 	preempt_disable();
 	kcpu = this_cpu_ptr(ctx->cpu);
 
+	local_irq_save(flags);
 	kcpu->reqs_available += nr;
+
 	while (kcpu->reqs_available >= ctx->req_batch * 2) {
 		kcpu->reqs_available -= ctx->req_batch;
 		atomic_add(ctx->req_batch, &ctx->reqs_available);
 	}
 
+	local_irq_restore(flags);
 	preempt_enable();
 }
 
@@ -833,10 +837,12 @@ static bool get_reqs_available(struct kioctx *ctx)
 {
 	struct kioctx_cpu *kcpu;
 	bool ret = false;
+	unsigned long flags;
 
 	preempt_disable();
 	kcpu = this_cpu_ptr(ctx->cpu);
 
+	local_irq_save(flags);
 	if (!kcpu->reqs_available) {
 		int old, avail = atomic_read(&ctx->reqs_available);
 
@@ -855,6 +861,7 @@ static bool get_reqs_available(struct kioctx *ctx)
 	ret = true;
 	kcpu->reqs_available--;
 out:
+	local_irq_restore(flags);
 	preempt_enable();
 	return ret;
 }
--
1.9.1