[SRU][N:gke][PATCH 012/106] mm/memory: pass PTE to copy_present_pte()
Tim Whisonant
tim.whisonant at canonical.com
Mon Jul 21 16:20:55 UTC 2025
From: David Hildenbrand <david at redhat.com>
BugLink: https://bugs.launchpad.net/bugs/2059316
BugLink: https://bugs.launchpad.net/bugs/2117098
We already read it, let's just forward it.
This patch is based on work by Ryan Roberts.
[david at redhat.com: fix the hmm "exclusive_cow" selftest]
Link: https://lkml.kernel.org/r/13f296b8-e882-47fd-b939-c2141dc28717@redhat.com
Link: https://lkml.kernel.org/r/20240129124649.189745-13-david@redhat.com
Signed-off-by: David Hildenbrand <david at redhat.com>
Reviewed-by: Ryan Roberts <ryan.roberts at arm.com>
Reviewed-by: Mike Rapoport (IBM) <rppt at kernel.org>
Cc: Albert Ou <aou at eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev at linux.ibm.com>
Cc: Alexandre Ghiti <alexghiti at rivosinc.com>
Cc: Aneesh Kumar K.V <aneesh.kumar at kernel.org>
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Christian Borntraeger <borntraeger at linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy at csgroup.eu>
Cc: David S. Miller <davem at davemloft.net>
Cc: Dinh Nguyen <dinguyen at kernel.org>
Cc: Gerald Schaefer <gerald.schaefer at linux.ibm.com>
Cc: Heiko Carstens <hca at linux.ibm.com>
Cc: Matthew Wilcox <willy at infradead.org>
Cc: Michael Ellerman <mpe at ellerman.id.au>
Cc: Naveen N. Rao <naveen.n.rao at linux.ibm.com>
Cc: Nicholas Piggin <npiggin at gmail.com>
Cc: Palmer Dabbelt <palmer at dabbelt.com>
Cc: Paul Walmsley <paul.walmsley at sifive.com>
Cc: Russell King (Oracle) <linux at armlinux.org.uk>
Cc: Sven Schnelle <svens at linux.ibm.com>
Cc: Vasily Gorbik <gor at linux.ibm.com>
Cc: Will Deacon <will at kernel.org>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
(cherry picked from commit 53723298ba436830fdf0744c19b57b2a18f44041)
Signed-off-by: dann frazier <dann.frazier at canonical.com>
Acked-by: Brad Figg <bfigg at nvidia.com>
Acked-by: Noah Wager <noah.wager at canonical.com>
Acked-by: Jacob Martin <jacob.martin at canonical.com>
Signed-off-by: Brad Figg <bfigg at nvidia.com>
Signed-off-by: Tim Whisonant <tim.whisonant at canonical.com>
---
mm/memory.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 78886f7110a2c..ef0806da1df13 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -959,10 +959,9 @@ static inline void __copy_present_pte(struct vm_area_struct *dst_vma,
  */
 static inline int
 copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
-		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		 struct folio **prealloc)
+		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
+		 int *rss, struct folio **prealloc)
 {
-	pte_t pte = ptep_get(src_pte);
 	struct page *page;
 	struct folio *folio;
 
@@ -1094,6 +1093,8 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			progress += 8;
 			continue;
 		}
+		ptent = ptep_get(src_pte);
+		VM_WARN_ON_ONCE(!pte_present(ptent));
 
 		/*
 		 * Device exclusive entry restored, continue by copying
@@ -1103,7 +1104,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		}
 		/* copy_present_pte() will clear `*prealloc' if consumed */
 		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
-				       addr, rss, &prealloc);
+				       ptent, addr, rss, &prealloc);
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
--
2.43.0