[SRU][N/O:linux-intel][PATCH 2/4] mm: create promo_wmark_pages and clean up open-coded sites

Thibault Ferrante thibault.ferrante at canonical.com
Tue Mar 18 16:30:03 UTC 2025


From: Kaiyang Zhao <kaiyang2 at cs.cmu.edu>

BugLink: https://bugs.launchpad.net/bugs/2103530

Patch series "mm: print the promo watermark in zoneinfo", v2.

This patch (of 2):

Define promo_wmark_pages() and convert the current call sites of wmark_pages()
with a fixed WMARK_PROMO argument to use it instead.
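
For readers not following the series, here is a minimal user-space sketch of
what the new helper evaluates to. The two macro bodies match the mmzone.h hunk
below; the reduced struct zone, the example values and main() are illustrative
assumptions, not kernel code:

	#include <assert.h>
	#include <stdio.h>

	enum zone_watermarks {
		WMARK_MIN,
		WMARK_LOW,
		WMARK_HIGH,
		WMARK_PROMO,
		NR_WMARK
	};

	/* Cut-down stand-in for the kernel's struct zone. */
	struct zone {
		unsigned long _watermark[NR_WMARK];
		unsigned long watermark_boost;
	};

	#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
	#define promo_wmark_pages(z) (z->_watermark[WMARK_PROMO] + z->watermark_boost)

	int main(void)
	{
		struct zone node_zone = {
			._watermark = { 128, 256, 384, 512 },	/* made-up page counts */
			.watermark_boost = 64,
		};
		struct zone *z = &node_zone;

		/* The helper is a drop-in replacement for the open-coded form. */
		assert(promo_wmark_pages(z) == wmark_pages(z, WMARK_PROMO));
		printf("promo watermark: %lu pages\n", promo_wmark_pages(z));
		return 0;
	}

With the helper in place, the NUMA-balancing checks in fair.c and vmscan.c
read as promo_wmark_pages(zone), mirroring the existing min/low/high_wmark_pages
helpers.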

Link: https://lkml.kernel.org/r/20240801232548.36604-1-kaiyang2@cs.cmu.edu
Link: https://lkml.kernel.org/r/20240801232548.36604-2-kaiyang2@cs.cmu.edu
Signed-off-by: Kaiyang Zhao <kaiyang2 at cs.cmu.edu>
Cc: Johannes Weiner <hannes at cmpxchg.org>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
(cherry picked from commit aa51b4f7d6e1f45205eb8cbfda1b9c14d8559409 https://github.com/intel/kernel-downstream)
Signed-off-by: Thibault Ferrante <thibault.ferrante at canonical.com>
---
 include/linux/mmzone.h | 1 +
 kernel/sched/fair.c    | 2 +-
 mm/vmscan.c            | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fd04c8e942250..e57665a6eba4a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -668,6 +668,7 @@ enum zone_watermarks {
 #define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost)
 #define low_wmark_pages(z) (z->_watermark[WMARK_LOW] + z->watermark_boost)
 #define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
+#define promo_wmark_pages(z) (z->_watermark[WMARK_PROMO] + z->watermark_boost)
 #define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
 
 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 27d123f1b69bf..70e7c3c1ed762 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1742,7 +1742,7 @@ static bool pgdat_free_space_enough(struct pglist_data *pgdat)
 			continue;
 
 		if (zone_watermark_ok(zone, 0,
-				      wmark_pages(zone, WMARK_PROMO) + enough_wmark,
+				      promo_wmark_pages(zone) + enough_wmark,
 				      ZONE_MOVABLE, 0))
 			return true;
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 20fcdffb9cf16..669ac38d8a0ab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6681,7 +6681,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
-			mark = wmark_pages(zone, WMARK_PROMO);
+			mark = promo_wmark_pages(zone);
 		else
 			mark = high_wmark_pages(zone);
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
-- 
2.45.2