[SRU][Xenial][PATCH 1/2] s390/bitops: add for_each_set_bit_inv helper
Joseph Salisbury
joseph.salisbury at canonical.com
Thu Feb 15 21:28:03 UTC 2018
From: Heiko Carstens <heiko.carstens at de.ibm.com>
BugLink: http://bugs.launchpad.net/bugs/1744736
Same helper function as for_each_set_bit in generic code.
Signed-off-by: Heiko Carstens <heiko.carstens at de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky at de.ibm.com>
(cherry picked from commit 09214545c4a40943ecb6cedc511cd4bd709c85a6)
Signed-off-by: Joseph Salisbury <joseph.salisbury at canonical.com>
---
arch/s390/include/asm/bitops.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/s390/include/asm/bitops.h b/arch/s390/include/asm/bitops.h
index 8043f10..4edb008 100644
--- a/arch/s390/include/asm/bitops.h
+++ b/arch/s390/include/asm/bitops.h
@@ -301,6 +301,11 @@ unsigned long find_first_bit_inv(const unsigned long *addr, unsigned long size);
unsigned long find_next_bit_inv(const unsigned long *addr, unsigned long size,
unsigned long offset);
+#define for_each_set_bit_inv(bit, addr, size) \
+ for ((bit) = find_first_bit_inv((addr), (size)); \
+ (bit) < (size); \
+ (bit) = find_next_bit_inv((addr), (size), (bit) + 1))
+
static inline void set_bit_inv(unsigned long nr, volatile unsigned long *ptr)
{
return set_bit(nr ^ (BITS_PER_LONG - 1), ptr);
--
2.7.4
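
For context (not part of the patch): a minimal usage sketch of the new helper, assuming a kernel module on s390 with an MSB-0 ("inverted") bitmap. The bitmap name, size, and handler below are hypothetical, chosen only to illustrate the iteration pattern.

#include <linux/bitops.h>
#include <linux/printk.h>
#include <asm/bitops.h>

#define NR_EXAMPLE_BITS 64

/* Hypothetical bitmap using inverted (big-endian) bit numbering. */
static unsigned long example_mask[BITS_TO_LONGS(NR_EXAMPLE_BITS)];

static void handle_example_bits(void)
{
	unsigned long bit;

	/* Walks every set bit, mirroring for_each_set_bit in generic code. */
	for_each_set_bit_inv(bit, example_mask, NR_EXAMPLE_BITS)
		pr_info("inverted bit %lu is set\n", bit);
}

The macro expands to a find_first_bit_inv()/find_next_bit_inv() loop, so the body runs once per set bit and the loop terminates when the returned index reaches the bitmap size.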