[ 3.8.y.z extended stable ] Patch "mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots" has been added to staging queue

Kamal Mostafa kamal at canonical.com
Thu Aug 29 19:48:59 UTC 2013


This is a note to let you know that I have just added a patch titled

    mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots

to the linux-3.8.y-queue branch of the 3.8.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y-queue

This patch is scheduled to be released in version 3.8.13.8.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.8.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From ee8527ee61132d06937b47fa0f8db47103291ecd Mon Sep 17 00:00:00 2001
From: Vineet Gupta <Vineet.Gupta1 at synopsys.com>
Date: Wed, 3 Jul 2013 15:03:31 -0700
Subject: mm: fix the TLB range flushed when __tlb_remove_page() runs out of
 slots

commit e6c495a96ce02574e765d5140039a64c8d4e8c9e upstream.

zap_pte_range loops from @addr to @end.  In the middle, if it runs out of
batching slots, TLB entries need to be flushed for @start to @interim,
NOT @interim to @end.

Since the ARC port doesn't use page free batching I can't test it myself,
but this seems like the right thing to do.

Observed this while working on a fix for the issue discussed in this thread:
http://www.spinics.net/lists/linux-arch/msg21736.html

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Hugh Dickins <hughd at google.com>
Cc: Rik van Riel <riel at redhat.com>
Cc: David Rientjes <rientjes at google.com>
Cc: Peter Zijlstra <peterz at infradead.org>
Acked-by: Catalin Marinas <catalin.marinas at arm.com>
Cc: Max Filippov <jcmvbkbc at gmail.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
[ kamal: 3.8.y-stable prereq for:
  2b04725 Fix TLB gather virtual address range invalidation corner cases ]
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 mm/memory.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 32a495a..b81825a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1106,6 +1106,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	unsigned long range_start = addr;

 again:
 	init_rss_vec(rss);
@@ -1211,12 +1212,14 @@ again:
 		force_flush = 0;

 #ifdef HAVE_GENERIC_MMU_GATHER
-		tlb->start = addr;
-		tlb->end = end;
+		tlb->start = range_start;
+		tlb->end = addr;
 #endif
 		tlb_flush_mmu(tlb);
-		if (addr != end)
+		if (addr != end) {
+			range_start = addr;
 			goto again;
+		}
 	}

 	return addr;
--
1.8.1.2
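
------

For context on the diff above: the flush window must cover the entries
gathered since the last flush ([range_start, addr)), not the part of the
range not yet visited ([addr, end)).  Here is a minimal userspace sketch of
the corrected pattern; zap_range(), flush_range() (playing the role of
tlb_flush_mmu()) and BATCH_SLOTS are illustrative stand-ins, not real
kernel APIs.

#include <stdio.h>

#define PAGE_SIZE   4096UL
#define BATCH_SLOTS 4	/* pretend the mmu_gather fills up every 4 pages */

/* Stand-in for tlb_flush_mmu(): just report which range gets flushed. */
static void flush_range(unsigned long start, unsigned long end)
{
	printf("flush TLB for [%#lx, %#lx)\n", start, end);
}

/* Models the zap_pte_range() loop after the fix. */
static void zap_range(unsigned long addr, unsigned long end)
{
	unsigned long range_start = addr;	/* start of the unflushed window */
	int slots = BATCH_SLOTS;

	while (addr < end) {
		addr += PAGE_SIZE;		/* "zap" one pte */
		if (--slots == 0) {
			/*
			 * Out of batching slots: the accumulated entries
			 * cover [range_start, addr), NOT [addr, end).
			 */
			flush_range(range_start, addr);
			range_start = addr;	/* open a fresh window */
			slots = BATCH_SLOTS;
		}
	}
	if (addr != range_start)
		flush_range(range_start, addr);	/* final partial batch */
}

int main(void)
{
	/* 10 pages with 4 slots: expect flushes at 4, 8 and 10 pages in. */
	zap_range(0x100000UL, 0x100000UL + 10 * PAGE_SIZE);
	return 0;
}

With the pre-fix assignments (tlb->start = addr; tlb->end = end;) the first
flush would cover the still-unvisited tail of the range instead of the pages
already zapped, leaving stale TLB entries behind for the gathered portion.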




