[SRU][P:linux][PATCH 1/1 v2] mm/page_alloc: fix deadlock on cpu_hotplug_lock in __accept_page()
Thibault Ferrante
thibault.ferrante at canonical.com
Thu May 15 16:49:33 UTC 2025
From: "Kirill A. Shutemov" <kirill.shutemov at linux.intel.com>
BugLink: https://bugs.launchpad.net/bugs/2109543
When the last page in the zone is accepted, __accept_page() calls
static_branch_dec(). This function takes cpu_hotplug_lock, which can lead
to a deadlock if the allocation occurs during the CPU bringup path, as
_cpu_up() also takes the lock.
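
For illustration, a condensed sketch of the problematic call chain
(intermediate allocation frames elided; static_branch_dec() acquires
cpu_hotplug_lock through cpus_read_lock()):

    _cpu_up()                       /* takes cpu_hotplug_lock */
      -> ... memory allocation on the bringup path ...
        -> __accept_page()          /* accepts the last page in the zone */
          -> static_branch_dec()
            -> cpus_read_lock()     /* cpu_hotplug_lock again -> deadlock */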
To prevent this deadlock, defer static_branch_dec() to a workqueue.

Call static_branch_dec() directly only when workqueues are not yet
initialized. Workqueues are initialized before CPU bringup, so this will
not conflict with the first scenario.
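
In condensed form, the pattern the fix applies (extracted from the diff
below, with the surrounding code elided):

    /* Runs from a workqueue, where taking cpu_hotplug_lock is safe. */
    void unaccepted_cleanup_work(struct work_struct *work)
    {
    	static_branch_dec(&zones_with_unaccepted_pages);
    }

    /* In __accept_page(), once the last page in the zone is accepted: */
    if (system_wq)
    	schedule_work(&zone->unaccepted_cleanup);	/* defer */
    else
    	unaccepted_cleanup_work(&zone->unaccepted_cleanup);	/* early boot */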
Link: https://lkml.kernel.org/r/20250329171030.3942298-1-kirill.shutemov@linux.intel.com
Fixes: 55ad43e8ba0f ("mm: add a helper to accept page")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov at linux.intel.com>
Reported-by: Srikanth Aithal <sraithal at amd.com>
Tested-by: Srikanth Aithal <sraithal at amd.com>
Cc: Dave Hansen <dave.hansen at intel.com>
Cc: Ashish Kalra <ashish.kalra at amd.com>
Cc: David Hildenbrand <david at redhat.com>
Cc: "Edgecombe, Rick P" <rick.p.edgecombe at intel.com>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: "Mike Rapoport (IBM)" <rppt at kernel.org>
Cc: Thomas Lendacky <thomas.lendacky at amd.com>
Cc: Vlastimil Babka <vbabka at suse.cz>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
(cherry picked from commit 4067196a52278156d18d8d6fa7f43970611b1b49)
Signed-off-by: Thibault Ferrante <thibault.ferrante at canonical.com>
---
 include/linux/mmzone.h |  3 +++
 mm/internal.h          |  1 +
 mm/mm_init.c           |  1 +
 mm/page_alloc.c        | 28 ++++++++++++++++++++++++++--
 4 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9540b41894da..9027f751b619 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -964,6 +964,9 @@ struct zone {
 #ifdef CONFIG_UNACCEPTED_MEMORY
 	/* Pages to be accepted. All pages on the list are MAX_PAGE_ORDER */
 	struct list_head unaccepted_pages;
+
+	/* To be called once the last page in the zone is accepted */
+	struct work_struct unaccepted_cleanup;
 #endif
 
 	/* zone flags, see below */
diff --git a/mm/internal.h b/mm/internal.h
index 20b3535935a3..2742c601fe10 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1517,6 +1517,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 
 #ifdef CONFIG_UNACCEPTED_MEMORY
 void accept_page(struct page *page);
+void unaccepted_cleanup_work(struct work_struct *work);
 #else /* CONFIG_UNACCEPTED_MEMORY */
 static inline void accept_page(struct page *page)
 {
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2630cc30147e..d5a51f65dc4d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1404,6 +1404,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 
 #ifdef CONFIG_UNACCEPTED_MEMORY
 	INIT_LIST_HEAD(&zone->unaccepted_pages);
+	INIT_WORK(&zone->unaccepted_cleanup, unaccepted_cleanup_work);
 #endif
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 542d25f77be8..50ffabed5dd9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6921,6 +6921,11 @@ static DEFINE_STATIC_KEY_FALSE(zones_with_unaccepted_pages);
 
 static bool lazy_accept = true;
 
+void unaccepted_cleanup_work(struct work_struct *work)
+{
+	static_branch_dec(&zones_with_unaccepted_pages);
+}
+
 static int __init accept_memory_parse(char *p)
 {
 	if (!strcmp(p, "lazy")) {
@@ -6959,8 +6964,27 @@ static void __accept_page(struct zone *zone, unsigned long *flags,
 
 	__free_pages_ok(page, MAX_PAGE_ORDER, FPI_TO_TAIL);
 
-	if (last)
-		static_branch_dec(&zones_with_unaccepted_pages);
+	if (last) {
+		/*
+		 * There are two corner cases:
+		 *
+		 * - If allocation occurs during the CPU bring up,
+		 *   static_branch_dec() cannot be used directly as
+		 *   it causes a deadlock on cpu_hotplug_lock.
+		 *
+		 *   Instead, use schedule_work() to prevent deadlock.
+		 *
+		 * - If allocation occurs before workqueues are initialized,
+		 *   static_branch_dec() should be called directly.
+		 *
+		 *   Workqueues are initialized before CPU bring up, so this
+		 *   will not conflict with the first scenario.
+		 */
+		if (system_wq)
+			schedule_work(&zone->unaccepted_cleanup);
+		else
+			unaccepted_cleanup_work(&zone->unaccepted_cleanup);
+	}
 }
 
 void accept_page(struct page *page)
--
2.48.1