[PATCH 138/241] dm: fix deadlock with request based dm and queue request_fn recursion
Herton Ronaldo Krzesinski
herton.krzesinski at canonical.com
Thu Dec 13 13:58:23 UTC 2012
3.5.7.2 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Jens Axboe <axboe at kernel.dk>
commit a8c32a5c98943d370ea606a2e7dc04717eb92206 upstream.
Request-based dm attempts to re-run the request queue from the
request completion path. If used with a driver that may call
end_io from inside its request_fn, we could deadlock trying to recurse
back into request dispatch. Fix this by punting the request queue
run to kblockd.
Tested to fix a quickly reproducible deadlock in such a scenario.
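As a rough userspace analogy of the recursion (hypothetical names, not
kernel or dm code): the driver's request_fn holds the queue lock while
completing a request; if the completion path calls back into dispatch on
the same thread, the non-recursive lock is taken a second time and the
thread deadlocks, whereas handing the re-run to another thread lets it
simply wait for the lock. A minimal sketch (build with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int pending = 1;			/* one queued "request" */

static void request_fn(void);

/* Worker thread: plays the role of kblockd re-running the queue. */
static void *deferred_run(void *arg)
{
	(void)arg;
	request_fn();			/* queue_lock is free by the time this runs */
	return NULL;
}

/* Stand-in for the driver's request_fn: completes a request while
 * holding the queue lock, then needs the queue to be run again. */
static void request_fn(void)
{
	pthread_t worker;
	int punted = 0;

	pthread_mutex_lock(&queue_lock);
	if (pending) {
		pending = 0;
		/*
		 * Calling request_fn() directly here (the blk_run_queue()
		 * case) would re-acquire queue_lock on the same thread and
		 * deadlock.  Punting the re-run to another thread (the
		 * blk_run_queue_async()/kblockd case) avoids that.
		 */
		pthread_create(&worker, NULL, deferred_run, NULL);
		punted = 1;
	}
	pthread_mutex_unlock(&queue_lock);

	if (punted)
		pthread_join(worker, NULL);
}

int main(void)
{
	request_fn();
	printf("completed without deadlocking\n");
	return 0;
}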
Acked-by: Alasdair G Kergon <agk at redhat.com>
Signed-off-by: Jens Axboe <axboe at kernel.dk>
Signed-off-by: Herton Ronaldo Krzesinski <herton.krzesinski at canonical.com>
---
drivers/md/dm.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 9ff3019..32370ea 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -754,8 +754,14 @@ static void rq_completed(struct mapped_device *md, int rw, int run_queue)
 	if (!md_in_flight(md))
 		wake_up(&md->wait);
 
+	/*
+	 * Run this off this callpath, as drivers could invoke end_io while
+	 * inside their request_fn (and holding the queue lock). Calling
+	 * back into ->request_fn() could deadlock attempting to grab the
+	 * queue lock again.
+	 */
 	if (run_queue)
-		blk_run_queue(md->queue);
+		blk_run_queue_async(md->queue);
 
 	/*
 	 * dm_put() must be at the end of this function. See the comment above
--
1.7.9.5
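For context, "punting the queue run to kblockd" means the queue is kicked
from workqueue context rather than from the completion path. Below is a
minimal sketch of that general deferred-run pattern; the names (my_dev,
my_dev_setup, my_run_queue_work, my_kick_queue) are hypothetical and this
is not the actual blk_run_queue_async() implementation:

#include <linux/blkdev.h>
#include <linux/workqueue.h>

struct my_dev {
	struct request_queue *queue;
	struct work_struct run_work;	/* deferred queue run */
};

/* Runs in workqueue context, outside the completion path and with no
 * queue lock held, so running the queue here cannot recurse into the
 * driver's request_fn while that lock is still taken. */
static void my_run_queue_work(struct work_struct *work)
{
	struct my_dev *dev = container_of(work, struct my_dev, run_work);

	blk_run_queue(dev->queue);
}

static void my_dev_setup(struct my_dev *dev)
{
	INIT_WORK(&dev->run_work, my_run_queue_work);
}

/* Called from the completion path instead of running the queue inline. */
static void my_kick_queue(struct my_dev *dev)
{
	schedule_work(&dev->run_work);
}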