[SRU][N:gke][PATCH 031/106] arm64/mm: implement pte_batch_hint()
Tim Whisonant
tim.whisonant at canonical.com
Mon Jul 21 16:21:14 UTC 2025
From: Ryan Roberts <ryan.roberts at arm.com>
BugLink: https://bugs.launchpad.net/bugs/2059316
BugLink: https://bugs.launchpad.net/bugs/2117098
When core code iterates over a range of ptes and calls ptep_get() for each
of them, if the range happens to cover contpte mappings, the number of pte
reads is amplified by a factor of the number of PTEs in a contpte block.
This is because for each call to ptep_get(), the implementation must read
all of the ptes in the contpte block to which it belongs in order to
gather the access and dirty bits. For example, with a 4K granule a contpte
block spans 16 ptes, so a scan that touches N contpte-mapped ptes performs
16 * N hardware pte reads.
This causes a hotspot for fork(), as well as for operations that unmap
memory such as munmap(), exit() and madvise(MADV_DONTNEED). Fortunately we
can fix this by implementing pte_batch_hint(), which allows these
iterators to skip the contpte tail ptes when gathering the batch of ptes
to operate on. This brings the number of PTE reads back down to 1 per pte.
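
As an illustration only (not part of the patch), here is a minimal
user-space sketch of how a batch-advancing loop can consume this hint.
The 16-entry block size models a 4K granule; the pte encoding, the
pte_valid_cont() stub and the 128-byte array alignment are simplifying
assumptions of the sketch, not kernel code:

#include <stdio.h>

#define CONT_PTES 16UL

typedef unsigned long pte_t;

/* Toy model: bit 0 marks a valid contpte mapping. */
static int pte_valid_cont(pte_t pte) { return pte & 1; }

static unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
	if (!pte_valid_cont(pte))
		return 1;
	/* Entries remaining up to the end of the contpte block. */
	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
}

int main(void)
{
	/* Align so the block-index arithmetic matches the kernel case. */
	static pte_t ptes[64] __attribute__((aligned(128)));
	unsigned long i;

	for (i = 16; i < 32; i++)	/* one 16-entry contpte block */
		ptes[i] = 1;

	/* Advance by the batch hint instead of one entry at a time. */
	for (i = 0; i < 64; ) {
		unsigned int nr = pte_batch_hint(&ptes[i], ptes[i]);

		printf("index %2lu: advance by %u\n", i, nr);
		i += nr;
	}
	return 0;
}

Compiled with gcc, the loop steps one entry at a time over the non-cont
ranges and jumps 16 entries at index 16, which is exactly the saving in
pte reads that the patch provides.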
Link: https://lkml.kernel.org/r/20240215103205.2607016-17-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
Acked-by: Mark Rutland <mark.rutland at arm.com>
Reviewed-by: David Hildenbrand <david at redhat.com>
Tested-by: John Hubbard <jhubbard at nvidia.com>
Acked-by: Catalin Marinas <catalin.marinas at arm.com>
Cc: Alistair Popple <apopple at nvidia.com>
Cc: Andrey Ryabinin <ryabinin.a.a at gmail.com>
Cc: Ard Biesheuvel <ardb at kernel.org>
Cc: Barry Song <21cnbao at gmail.com>
Cc: Borislav Petkov (AMD) <bp at alien8.de>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: James Morse <james.morse at arm.com>
Cc: Kefeng Wang <wangkefeng.wang at huawei.com>
Cc: Marc Zyngier <maz at kernel.org>
Cc: Matthew Wilcox (Oracle) <willy at infradead.org>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Will Deacon <will at kernel.org>
Cc: Yang Shi <shy828301 at gmail.com>
Cc: Zi Yan <ziy at nvidia.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
(cherry picked from commit fb5451e5f72b31002760083a99fbb41771c4f1ad)
Signed-off-by: dann frazier <dann.frazier at canonical.com>
Acked-by: Brad Figg <bfigg at nvidia.com>
Acked-by: Noah Wager <noah.wager at canonical.com>
Acked-by: Jacob Martin <jacob.martin at canonical.com>
Signed-off-by: Brad Figg <bfigg at nvidia.com>
Signed-off-by: Tim Whisonant <tim.whisonant at canonical.com>
---
arch/arm64/include/asm/pgtable.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index a8f1a35e30867..d759a20d2929a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1213,6 +1213,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
__contpte_try_unfold(mm, addr, ptep, pte);
}
+#define pte_batch_hint pte_batch_hint
+static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
+{
+ if (!pte_valid_cont(pte))
+ return 1;
+
+ return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
+}
+
/*
* The below functions constitute the public API that arm64 presents to the
* core-mm to manipulate PTE entries within their page tables (or at least this
--
2.43.0
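
For reference (again not part of the patch), the hint arithmetic relies on
ptes being 8 bytes and contpte blocks being naturally aligned, so the low
bits of the ptep address encode the index within the block. A worked
example, assuming a 4K granule where CONT_PTES is 16:

/*
 * ptep at byte offset 0x48 within a page-aligned pte table:
 *
 *   0x48 >> 3      = 9      pte index within the table
 *   9 & (16 - 1)   = 9      index within its contpte block
 *   16 - 9         = 7      entries left in the block, so the
 *                           caller may batch up to 7 ptes before
 *                           crossing into the next block
 */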