[SRU Precise] tcp: make challenge acks less predictable
Stefan Bader
stefan.bader at canonical.com
Fri Aug 12 15:35:38 UTC 2016
From: Eric Dumazet <edumazet at google.com>
commit 75ff39ccc1bd5d3c455b6822ab09e533c551f758 upstream.
Yue Cao claims that current host rate limiting of challenge ACKs
(RFC 5961) could leak enough information to allow a patient attacker
to hijack TCP sessions. He will soon provide details in an academic
paper.
This patch increases the default limit from 100 to 1000, and adds
some randomization so that the attacker can no longer hijack
sessions without spending a considerable amount of probes.
Based on initial analysis and patch from Linus.
Note that we also have per socket rate limiting, so it is tempting
to remove the host limit in the future.
v2: randomize the count of challenge acks per second, not the period.
Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2")
Reported-by: Yue Cao <ycao009 at ucr.edu>
Signed-off-by: Eric Dumazet <edumazet at google.com>
Suggested-by: Linus Torvalds <torvalds at linux-foundation.org>
Cc: Yuchung Cheng <ycheng at google.com>
Cc: Neal Cardwell <ncardwell at google.com>
Acked-by: Neal Cardwell <ncardwell at google.com>
Acked-by: Yuchung Cheng <ycheng at google.com>
Signed-off-by: David S. Miller <davem at davemloft.net>
[bwh: Backported to 3.2:
- Adjust context
- Use ACCESS_ONCE() instead of {READ,WRITE}_ONCE()
- Open-code prandom_u32_max()]
Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
CVE-2016-5696
[smb: Picked from ff13c4bb5dfe5cd1bd75e2720d1f0aa2e6e81246 bwh-queue]
Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
---
net/ipv4/tcp_input.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 2cc1313..5380c00 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -86,7 +86,7 @@ int sysctl_tcp_adv_win_scale __read_mostly = 1;
EXPORT_SYMBOL(sysctl_tcp_adv_win_scale);
/* rfc5961 challenge ack rate limiting */
-int sysctl_tcp_challenge_ack_limit = 100;
+int sysctl_tcp_challenge_ack_limit = 1000;
int sysctl_tcp_stdurg __read_mostly;
int sysctl_tcp_rfc1337 __read_mostly;
@@ -3288,13 +3288,20 @@ static void tcp_send_challenge_ack(struct sock *sk)
/* unprotected vars, we dont care of overwrites */
static u32 challenge_timestamp;
static unsigned int challenge_count;
- u32 now = jiffies / HZ;
+ u32 count, now = jiffies / HZ;
if (now != challenge_timestamp) {
+ u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
+
challenge_timestamp = now;
- challenge_count = 0;
- }
- if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
+ ACCESS_ONCE(challenge_count) =
+ half + (u32)(
+ ((u64) random32() * sysctl_tcp_challenge_ack_limit)
+ >> 32);
+ }
+ count = ACCESS_ONCE(challenge_count);
+ if (count > 0) {
+ ACCESS_ONCE(challenge_count) = count - 1;
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
tcp_send_ack(sk);
}
--
1.9.1