[SRU][F][PATCH 0/1] CVE-2024-26689
Massimiliano Pellizzer
massimiliano.pellizzer at canonical.com
Fri Mar 14 11:23:26 UTC 2025
https://ubuntu.com/security/CVE-2024-26689
[ Impact ]
ceph: prevent use-after-free in encode_cap_msg()
In fs/ceph/caps.c, in encode_cap_msg(), a use-after-free error was
caught by KASAN at this line: 'ceph_buffer_get(arg->xattr_buf);'. This
implies that the buffer was freed before its refcount could be
incremented here. In the same file, in handle_cap_grant(), the refcount
is decremented by this line: 'ceph_buffer_put(ci->i_xattrs.blob);'. It
appears that a race occurred and the resource was freed by the latter
line before the former line could increment its refcount.
encode_cap_msg() is called by __send_cap(), and __send_cap() is called
by ceph_check_caps() after it calls __prep_cap(). __prep_cap() is where
arg->xattr_buf is assigned from ci->i_xattrs.blob, so that is where
the refcount must be incremented to prevent the use-after-free.
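The race and the effect of taking the reference at assignment time can
be illustrated with a minimal userspace model. This is not the kernel
code: struct buf, buf_get()/buf_put() and race_fixed() are hypothetical
stand-ins for struct ceph_buffer, ceph_buffer_get()/ceph_buffer_put()
and the __prep_cap() path, and the "racing" put is run inline rather
than from another CPU.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of a refcounted xattr buffer (stand-in for
 * struct ceph_buffer). 'freed' is set when the final put releases it. */
struct buf {
	int refcount;
	int freed;
};

static struct buf *buf_get(struct buf *b)	/* ~ceph_buffer_get() */
{
	if (b)
		b->refcount++;
	return b;
}

static void buf_put(struct buf *b)		/* ~ceph_buffer_put() */
{
	if (b && --b->refcount == 0)
		b->freed = 1;			/* stands in for kfree() */
}

/* Fixed ordering: __prep_cap() takes its own reference at assignment
 * time, so a racing handle_cap_grant() put cannot free the buffer
 * before encode_cap_msg() uses it. Returns 1 if the buffer stayed
 * alive for the message encoding. */
int race_fixed(void)
{
	struct buf blob = { .refcount = 1, .freed = 0 };

	/* __prep_cap(): arg->xattr_buf gets its own reference. */
	struct buf *xattr_buf = buf_get(&blob);

	/* handle_cap_grant() racing: drops the inode's reference. */
	buf_put(&blob);

	/* encode_cap_msg() can still safely use the buffer. */
	int alive = !xattr_buf->freed;

	buf_put(xattr_buf);
	return alive;
}

/* Broken ordering (pre-fix): the assignment takes no reference, so the
 * racing put frees the buffer out from under encode_cap_msg(). */
int race_broken(void)
{
	struct buf blob = { .refcount = 1, .freed = 0 };
	struct buf *xattr_buf = &blob;		/* no buf_get() here */

	buf_put(&blob);				/* racing put frees it */

	return !xattr_buf->freed;		/* use-after-free: 0 */
}
```

In the model, race_fixed() returns 1 and race_broken() returns 0,
mirroring why moving the get into __prep_cap() closes the window.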
[ Fix ]
Oracular: Not affected
Noble: Not affected
Jammy: Fixed via upstream stable updates (LP: #2059014)
Focal: Backported from mainline
[ Test Plan ]
Compile and boot tested.
Stress tested a single-node Ceph installation:
$ sudo snap install microceph
$ sudo microceph cluster bootstrap
$ sudo microceph.ceph osd crush rule rm replicated_rule
$ sudo microceph.ceph osd crush rule create-replicated single default osd
$ sudo microceph disk add /dev/sdb --wipe
$ sudo microceph.ceph config set global osd_pool_default_size 1
$ sudo microceph.ceph osd pool create cephfs_metadata 8
$ sudo microceph.ceph osd pool create cephfs_data 8
$ sudo microceph.ceph fs new cephfs cephfs_metadata cephfs_data
$ sudo microceph.ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' osd 'allow *' mgr 'allow *'
$ sudo mkdir -p /mnt/cephfs
$ sudo mount -t ceph $(hostname -I | awk '{print $1}'):6789:/ /mnt/cephfs -o name=admin,secret=xxx
$ mount
...
10.xx.xx.xx:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
$ cd /mnt/cephfs
$ sudo stress-ng --hdd 8 --hdd-ops 500000 --timeout 5m --metrics-brief
stress-ng: info: [2826] dispatching hogs: 8 hdd
stress-ng: info: [2826] successful run completed in 31.34s
$ sudo stress-ng --dir 8 --dir-ops 500000 --timeout 5m --metrics-brief
stress-ng: info: [2844] dispatching hogs: 8 dir
stress-ng: info: [2844] successful run completed in 168.83s (2 mins, 48.83 secs)
$ sudo stress-ng --xattr 8 --xattr-ops 500000 --timeout 5m --metrics-brief
stress-ng: info: [2863] dispatching hogs: 8 xattr
stress-ng: info: [2863] successful run completed in 300.19s (5 mins, 0.19 secs)
[ Where Problems Could Occur ]
The fix affects the Ceph filesystem client. An issue with this fix may
lead to improper reference counting of extended attribute buffers.
A user might experience problems such as filesystem inconsistencies,
unexpected kernel crashes, data corruption, or failures when accessing
or modifying files on Ceph.