ACK: [Precise][SRU][PATCH 1/1] mutex: Place lock in contended state after fastpath_lock failure
Colin Ian King
colin.king at canonical.com
Fri Aug 24 12:51:48 UTC 2012
On 24/08/12 11:47, Luis Henriques wrote:
> From: Will Deacon <will.deacon at arm.com>
>
> BugLink: http://bugs.launchpad.net/bugs/1041114
>
> ARM recently moved to asm-generic/mutex-xchg.h for its mutex
> implementation after the previous implementation was found to be missing
> some crucial memory barriers. However, this has revealed some problems
> running hackbench on SMP platforms due to the way in which the
> MUTEX_SPIN_ON_OWNER code operates.
>
> The symptoms are that a bunch of hackbench tasks are left waiting on an
> unlocked mutex and therefore never get woken up to claim it. This boils
> down to the following sequence of events:
>
>      Task A        Task B        Task C        Lock value
>   0                                                1
>   1  lock()                                        0
>   2                lock()                          0
>   3                spin(A)                         0
>   4  unlock()                                      1
>   5                              lock()            0
>   6                cmpxchg(1,0)                    0
>   7                contended()                    -1
>   8  lock()                                        0
>   9  spin(C)                                       0
>  10                              unlock()          1
>  11  cmpxchg(1,0)                                  0
>  12  unlock()                                      1
>
> At this point, the lock is unlocked, but Task B is in an uninterruptible
> sleep with nobody to wake it up.
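For reference, here is a minimal userspace sketch of the xchg-based fastpath semantics (C11 atomics standing in for atomic_xchg(); the helper names are illustrative only, not kernel APIs). count is 1 when unlocked, 0 when locked with no waiters, and -1 when locked and possibly contended, so the failed xchg(count, 0) at step 8 silently erases the -1 that Task B set at step 7 and the later unlocks never enter the slow path:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int count = 1;            /* 1: unlocked, 0: locked, -1: contended */

/* Pre-patch lock fastpath behaviour: unconditionally writes 0. */
static int old_fastpath_lock(void)
{
        return atomic_exchange(&count, 0) == 1;   /* a non-1 old value means we failed */
}

/* Unlock fastpath: only a non-zero (contended) old value sends us to the wakeup slowpath. */
static int unlock_needs_slowpath(void)
{
        return atomic_exchange(&count, 1) != 0;
}

int main(void)
{
        atomic_store(&count, -1);       /* step 7: Task B has marked the lock contended */
        (void)old_fastpath_lock();      /* step 8: Task A fails, but count is now 0     */
        printf("unlock wakes waiters? %s\n",
               unlock_needs_slowpath() ? "yes" : "no");   /* prints "no": B sleeps on */
        return 0;
}
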
>
> This patch fixes the problem by ensuring we put the lock into the
> contended state if we fail to acquire it on the fastpath, ensuring that
> any blocked waiters are woken up when the mutex is released.
>
> Signed-off-by: Will Deacon <will.deacon at arm.com>
> Cc: Arnd Bergmann <arnd at arndb.de>
> Cc: Chris Mason <chris.mason at fusionio.com>
> Cc: Ingo Molnar <mingo at elte.hu>
> Cc: <stable at vger.kernel.org>
> Reviewed-by: Nicolas Pitre <nico at linaro.org>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
> Link: http://lkml.kernel.org/n/tip-6e9lrw2avczr0617fzl5vqb8@git.kernel.org
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> (cherry picked from commit 0bce9c46bf3b15f485d82d7e81dabed6ebcc24b1)
>
> Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
> ---
> include/asm-generic/mutex-xchg.h | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/mutex-xchg.h b/include/asm-generic/mutex-xchg.h
> index 580a6d3..c04e0db 100644
> --- a/include/asm-generic/mutex-xchg.h
> +++ b/include/asm-generic/mutex-xchg.h
> @@ -26,7 +26,13 @@ static inline void
>  __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
>  {
>          if (unlikely(atomic_xchg(count, 0) != 1))
> -                fail_fn(count);
> +                /*
> +                 * We failed to acquire the lock, so mark it contended
> +                 * to ensure that any waiting tasks are woken up by the
> +                 * unlock slow path.
> +                 */
> +                if (likely(atomic_xchg(count, -1) != 1))
> +                        fail_fn(count);
>  }
>  
>  /**
> @@ -43,7 +49,8 @@ static inline int
>  __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
>  {
>          if (unlikely(atomic_xchg(count, 0) != 1))
> -                return fail_fn(count);
> +                if (likely(atomic_xchg(count, -1) != 1))
> +                        return fail_fn(count);
>          return 0;
>  }
>  
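
For completeness, the same idea as the hunks above in a standalone sketch (userspace C11 atomics, illustrative names only, not the kernel code itself): on fastpath failure the lock is re-marked contended, and if it happens to be released between the two exchanges the caller simply acquires it while it stays marked contended, so the next unlock takes the slow path once for nothing, which is harmless:

#include <stdatomic.h>

static atomic_int count = 1;    /* 1: unlocked, 0: locked, -1: contended */

static void slowpath_lock(atomic_int *c)
{
        (void)c;                /* in the kernel this queues the task and sleeps */
}

/* Mirrors the patched __mutex_fastpath_lock() above, with fail_fn inlined. */
static void patched_fastpath_lock(void)
{
        if (atomic_exchange(&count, 0) != 1) {
                /*
                 * Failed to take the lock: re-mark it contended so the
                 * owner's unlock is forced through the slow path and
                 * wakes any sleepers, instead of leaving count at 0.
                 */
                if (atomic_exchange(&count, -1) != 1)
                        slowpath_lock(&count);
                /*
                 * If the second exchange returned 1, the lock was freed
                 * between the two operations and we now own it; it is
                 * merely left marked contended, so the next unlock takes
                 * the slow path once spuriously.
                 */
        }
}

int main(void)
{
        atomic_store(&count, -1);   /* a waiter already marked the lock contended  */
        patched_fastpath_lock();    /* fails, but leaves count at -1 rather than 0 */
        /* The owner's unlock (atomic_xchg(count, 1) != 0) now takes the slow path. */
        return 0;
}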
>