[PATCH 1/1] Revert "Revert "fork: defer linking file vma until vma is fully initialized""
Yuxuan Luo
yuxuan.luo at canonical.com
Thu Aug 29 19:33:29 UTC 2024
This reverts commit 22cfd78a5f58f72a37a1971af8633c50d7e8f468.
22cfd78a5f58 ("Revert "fork: defer linking file vma until vma is fully
initialized"") is pulled from linux-6.1.y branch 04b0c4191234, reverting
the linux-6.1.y backport commit 0c42f7e039ab ("fork: defer linking file
vma until vma is fully initialized"). However, since the source of the
reverted commit in Noble tree is the upstream branch rather than
linux-6.1.y, it is incorrect to revert it and also leave Noble
vulnerable to CVE-2024-27022. Revert the reverting commit to fix this
issue.
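To illustrate the class of bug the restored ordering avoids: publishing
an object into a shared structure before its private data is set up
gives concurrent readers a window in which they can observe a
half-constructed object. With this patch restored, dup_mmap() fully
initializes the new vma (hugetlb private data, maple tree link,
->open()) before linking it into the file's i_mmap tree. The sketch
below is a minimal user-space model of that ordering, not kernel code;
all names in it are hypothetical.

/*
 * Illustrative model only: a reader may run as soon as the object is
 * visible, so the writer must initialize before publishing.
 * Build with: cc -pthread example.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct obj {
	int private_ready;		/* stands in for hugetlb private data */
};

static _Atomic(struct obj *) published;	/* stands in for the i_mmap link */

static void *reader(void *arg)
{
	struct obj *o;

	/* An rmap-style walker: runs as soon as the object is visible. */
	while (!(o = atomic_load(&published)))
		;
	printf("reader saw private_ready=%d\n", o->private_ready);
	return NULL;
}

int main(void)
{
	static struct obj o;
	pthread_t t;

	pthread_create(&t, NULL, reader, NULL);

	/* Buggy order (the reverted state): publish first, initialize later. */
	/* atomic_store(&published, &o); o.private_ready = 1; */

	/* Fixed order (what this patch restores): initialize, then publish. */
	o.private_ready = 1;
	atomic_store(&published, &o);

	pthread_join(t, NULL);
	return 0;
}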
CVE-2024-27022
Signed-off-by: Yuxuan Luo <yuxuan.luo at canonical.com>
---
kernel/fork.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index 172fc8c09973c..92436fff039b2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -717,6 +717,23 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
} else if (anon_vma_fork(tmp, mpnt))
goto fail_nomem_anon_vma_fork;
vm_flags_clear(tmp, VM_LOCKED_MASK);
+ /*
+ * Copy/update hugetlb private vma information.
+ */
+ if (is_vm_hugetlb_page(tmp))
+ hugetlb_dup_vma_private(tmp);
+
+ /*
+ * Link the vma into the MT. After using __mt_dup(), memory
+ * allocation is not necessary here, so it cannot fail.
+ */
+ vma_iter_bulk_store(&vmi, tmp);
+
+ mm->map_count++;
+
+ if (tmp->vm_ops && tmp->vm_ops->open)
+ tmp->vm_ops->open(tmp);
+
file = tmp->vm_file;
if (file) {
struct address_space *mapping = file->f_mapping;
@@ -733,25 +750,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
i_mmap_unlock_write(mapping);
}
- /*
- * Copy/update hugetlb private vma information.
- */
- if (is_vm_hugetlb_page(tmp))
- hugetlb_dup_vma_private(tmp);
-
- /*
- * Link the vma into the MT. After using __mt_dup(), memory
- * allocation is not necessary here, so it cannot fail.
- */
- vma_iter_bulk_store(&vmi, tmp);
-
- mm->map_count++;
if (!(tmp->vm_flags & VM_WIPEONFORK))
retval = copy_page_range(tmp, mpnt);
- if (tmp->vm_ops && tmp->vm_ops->open)
- tmp->vm_ops->open(tmp);
-
if (retval) {
mpnt = vma_next(&vmi);
goto loop_out;
--
2.34.1