[Bug 2078906] Re: Prevent race condition when printing Inode in ll_sync_inode
Seyeong Kim
2078906 at bugs.launchpad.net
Thu Sep 25 12:51:14 UTC 2025
Since the symptom is not readily reproducible in a normal operating environment,
the following verification was performed.
# juju status
Model Controller Cloud/Region Version SLA Timestamp
ceph maas-default maas/default 2.9.52 unsupported 12:10:29Z
App Version Status Scale Charm Channel Rev Exposed Message
ceph-fs 17.2.7 active 1 ceph-fs quincy/stable 194 no Unit is ready
ceph-mon 17.2.7 active 1 ceph-mon quincy/stable 388 no Unit is ready and clustered
ceph-osd 17.2.7 active 3 ceph-osd quincy/stable 753 no Unit is ready (1 OSD)
ubuntu active 3 ubuntu stable 26 no
Unit Workload Agent Machine Public address Ports Message
ceph-fs/0* active idle 7 10.0.0.142 Unit is ready
ceph-mon/0* active idle 3 10.0.0.138 Unit is ready and clustered
ceph-osd/0 active idle 0 10.0.0.128 Unit is ready (1 OSD)
ceph-osd/1* active idle 1 10.0.0.129 Unit is ready (1 OSD)
ceph-osd/2 active idle 2 10.0.0.130 Unit is ready (1 OSD)
ubuntu/0* active idle 4 10.0.0.139
ubuntu/1 active idle 5 10.0.0.140
ubuntu/2 active idle 6 10.0.0.141
Machine State Address Inst id Series AZ Message
0 started 10.0.0.128 node-28 jammy default Deployed
1 started 10.0.0.129 node-29 jammy default Deployed
2 started 10.0.0.130 node-30 jammy default Deployed
3 started 10.0.0.138 node-38 jammy default Deployed
4 started 10.0.0.139 node-39 jammy default Deployed
5 started 10.0.0.140 node-40 jammy default Deployed
6 started 10.0.0.141 node-41 jammy default Deployed
7 started 10.0.0.142 node-42 jammy default Deployed
The ceph-mon/0 unit was running both the Ceph monitor and an NFS-Ganesha setup.
On ubuntu/0,1,2, the NFS-Ganesha export was mounted at /mnt/nfs.
root at node-39:/mnt/nfs# mount
...
10.0.0.138:/cephfs on /mnt/nfs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.139,local_lock=none,addr=10.0.0.138)
root at node-39:/mnt/nfs# ls
stress.sh stressdir test
After running a large number of operations (touch, chmod, and others
[1]), ubuntu/0,1,2 became unresponsive after several minutes. The `ls`
command no longer returned under /mnt/nfs on any of the nodes, and I
had to reboot all three of them.
I then upgraded Ceph using the proposed packages. Because ceph-mon and
NFS-Ganesha were running on the same node, I upgraded all Ceph
components together.
I then ran the same script for over 15 minutes, and it worked without
issues.
I believe this can be considered a proper verification.
ceph-mon
ubuntu at node-38:~$ dpkg -l | grep ceph
ii ceph 17.2.9-0ubuntu0.22.04.1 amd64 distributed storage and file system
ii ceph-base 17.2.9-0ubuntu0.22.04.1 amd64 common ceph daemon libraries and management tools
ii ceph-common 17.2.9-0ubuntu0.22.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 17.2.9-0ubuntu0.22.04.1 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 17.2.9-0ubuntu0.22.04.1 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 17.2.9-0ubuntu0.22.04.1 all ceph manager modules which are always enabled
ii ceph-mon 17.2.9-0ubuntu0.22.04.1 amd64 monitor server for the ceph storage system
ii ceph-osd 17.2.9-0ubuntu0.22.04.1 amd64 OSD server for the ceph storage system
ii ceph-volume 17.2.9-0ubuntu0.22.04.1 all tool to facilidate OSD deployment
ii libcephfs2 17.2.9-0ubuntu0.22.04.1 amd64 Ceph distributed file system client library
ii libsqlite3-mod-ceph 17.2.9-0ubuntu0.22.04.1 amd64 SQLite3 VFS for Ceph
ii nfs-ganesha-ceph:amd64 3.5-1ubuntu1 amd64 nfs-ganesha fsal ceph libraries
ii python3-ceph-argparse 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 17.2.9-0ubuntu0.22.04.1 all Python 3 utility libraries for Ceph
ii python3-cephfs 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 libraries for the Ceph libcephfs library
ceph-osd/0,1,2
ubuntu at node-28:~$ dpkg -l | grep ceph
ii ceph 17.2.9-0ubuntu0.22.04.1 amd64 distributed storage and file system
ii ceph-base 17.2.9-0ubuntu0.22.04.1 amd64 common ceph daemon libraries and management tools
ii ceph-common 17.2.9-0ubuntu0.22.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 17.2.9-0ubuntu0.22.04.1 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 17.2.9-0ubuntu0.22.04.1 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 17.2.9-0ubuntu0.22.04.1 all ceph manager modules which are always enabled
ii ceph-mon 17.2.9-0ubuntu0.22.04.1 amd64 monitor server for the ceph storage system
ii ceph-osd 17.2.9-0ubuntu0.22.04.1 amd64 OSD server for the ceph storage system
ii ceph-volume 17.2.9-0ubuntu0.22.04.1 all tool to facilidate OSD deployment
ii libcephfs2 17.2.9-0ubuntu0.22.04.1 amd64 Ceph distributed file system client library
ii libsqlite3-mod-ceph 17.2.9-0ubuntu0.22.04.1 amd64 SQLite3 VFS for Ceph
ii python3-ceph-argparse 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 17.2.9-0ubuntu0.22.04.1 all Python 3 utility libraries for Ceph
ii python3-cephfs 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 libraries for the Ceph libcephfs library
ceph-fs
ubuntu at node-42:~$ dpkg -l | grep ceph
ii ceph-base 17.2.9-0ubuntu0.22.04.1 amd64 common ceph daemon libraries and management tools
ii ceph-common 17.2.9-0ubuntu0.22.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 17.2.9-0ubuntu0.22.04.1 amd64 metadata server for the ceph distributed file system
ii libcephfs2 17.2.9-0ubuntu0.22.04.1 amd64 Ceph distributed file system client library
ii python3-ceph-argparse 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 17.2.9-0ubuntu0.22.04.1 all Python 3 utility libraries for Ceph
ii python3-cephfs 17.2.9-0ubuntu0.22.04.1 amd64 Python 3 libraries for the Ceph libcephfs library
root at node-38:/home/ubuntu# ceph status
  cluster:
    id:     29be4eca-99eb-11f0-a698-b14e1333ce91
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 1 daemons, quorum node-38 (age 13m)
    mgr: node-38(active, since 13m)
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 11m), 3 in (since 3h)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 113 pgs
    objects: 693 objects, 192 MiB
    usage:   4.3 GiB used, 131 GiB / 135 GiB avail
    pgs:     113 active+clean

  io:
    client: 2.0 MiB/s wr, 0 op/s rd, 173 op/s wr
[1]
# script
#!/usr/bin/env bash
set -euo pipefail
# config
: "${MOUNT:=/mnt/nfs}"
: "${STRESSDIR:=${MOUNT}/stressdir}"
: "${N_LINKS:=20000}"
ts() { date '+%F %T'; }
log() { printf '%s %s\n' "$(ts)" "$*"; }
need_cmds() {
command -v dd >/dev/null || { echo "dd missing"; exit 1; }
command -v truncate >/dev/null || { echo "truncate missing"; exit 1; }
command -v shuf >/dev/null || { echo "shuf missing"; exit 1; }
command -v setfattr >/dev/null || true
}
init() {
log "[init] STRESSDIR=$STRESSDIR N_LINKS=$N_LINKS"
mkdir -p "$STRESSDIR"
cd "$STRESSDIR"
# cleanup
pkill -f "dd if=/dev/zero of=${STRESSDIR}/seed" 2>/dev/null || true
pkill -f "truncate -s 0 ${STRESSDIR}/seed" 2>/dev/null || true
find . -maxdepth 1 -type f -name 'h_*' -print0 | xargs -0 -r rm -f
: > seed
log "[init] creating $N_LINKS hardlinks..."
for ((i=1;i<=N_LINKS;i++)); do
ln seed "h_${i}" 2>/dev/null || true
done
log "[init] done"
ulimit -c unlimited || true
}
run() {
cd "$STRESSDIR"
log "[run] starting workers"
# write stress
( while true; do
dd if=/dev/zero of=seed bs=1M count=64 oflag=direct conv=notrunc status=none 2>/dev/null || true
done ) &
# size stress
( while true; do
truncate -s 0 seed 2>/dev/null || true
truncate -s 104857600 seed 2>/dev/null || true
done ) &
# perm stress
( while true; do
chmod 0600 seed 2>/dev/null || true
chmod 0644 seed 2>/dev/null || true
done ) &
# mtime stress
( while true; do
touch -m seed 2>/dev/null || true
done ) &
# xattr stress
if command -v setfattr >/dev/null; then
( while true; do
setfattr -n user.t -v "$(date +%s%N)" seed 2>/dev/null || true
done ) &
fi
# link stress
for j in $(seq 1 64); do
(
while true; do
for k in $(shuf -i 1-"$N_LINKS" -n 1000); do
touch -m "h_${k}" 2>/dev/null || true
chmod 0600 "h_${k}" 2>/dev/null || true
chmod 0644 "h_${k}" 2>/dev/null || true
if command -v setfattr >/dev/null; then
setfattr -n user.t -v "$RANDOM" "h_${k}" 2>/dev/null || true
fi
done
done
) &
done
log "[run] workers started (pids: $(jobs -p | xargs echo))"
}
stop() {
# kill the worker loops recorded by run(); the pkill calls below only
# catch their short-lived children
if [ -f "${STRESSDIR}/.worker_pids" ]; then
xargs -r kill < "${STRESSDIR}/.worker_pids" 2>/dev/null || true
rm -f "${STRESSDIR}/.worker_pids"
fi
pkill -f "dd if=/dev/zero of=${STRESSDIR}/seed" 2>/dev/null || true
pkill -f "truncate -s 0 ${STRESSDIR}/seed" 2>/dev/null || true
pkill -f "chmod 0600 seed" 2>/dev/null || true
pkill -f "chmod 0644 seed" 2>/dev/null || true
pkill -f "touch -m seed" 2>/dev/null || true
pkill -f "setfattr -n user.t" 2>/dev/null || true
pkill -f "shuf -i 1-${N_LINKS} -n 1000" 2>/dev/null || true
log "[stop] workers stopped"
}
clean() {
stop
cd "$STRESSDIR" 2>/dev/null || exit 0
find . -maxdepth 1 -type f -name 'h_*' -print0 | xargs -0 -r rm -f
rm -f seed
log "[clean] done"
}
status() {
echo "STRESSDIR=$STRESSDIR"
echo "Running PIDs:"; pgrep -fa 'dd if=/dev/zero|truncate -s|touch -m|chmod 06|setfattr|shuf -i' || true
echo "Status: $(date '+%F %T')"
}
CMD="${1:-}"
case "$CMD" in
init) need_cmds; init ;;
run) run ;;
stop) stop ;;
clean) clean ;;
status) status ;;
*)
echo "Usage: $0 [init|run|stop|clean|status]"
echo " (env) MOUNT=$MOUNT N_LINKS=$N_LINKS STRESSDIR=$STRESSDIR"
exit 1
;;
esac
** Tags removed: verification-needed verification-needed-jammy
** Tags added: verification-done verification-done-jammy
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/2078906
Title:
Prevent race condition when printing Inode in ll_sync_inode
Status in ceph package in Ubuntu:
In Progress
Status in ceph source package in Focal:
Won't Fix
Status in ceph source package in Jammy:
Fix Committed
Status in ceph source package in Noble:
In Progress
Status in ceph source package in Oracular:
Won't Fix
Status in ceph source package in Plucky:
In Progress
Bug description:
[Impact]
In the ll_sync_inode function, the entire Inode structure is printed without holding a lock, which can lead to the following crash backtrace:
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140705682900544) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140705682900544) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140705682900544, signo=signo at entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffa92094476 in __GI_raise (sig=sig at entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffa9207a7f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffa910783c3 in ceph::__ceph_assert_fail (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>) at ./src/common/assert.cc:75
#6 0x00007ffa91078525 in ceph::__ceph_assert_fail (ctx=...) at ./src/common/assert.cc:80
#7 0x00007ffa7049f602 in xlist<ObjectCacher::Object*>::size (this=0x7ffa20734638, this=0x7ffa20734638) at ./src/include/xlist.h:87
#8 operator<< (os=..., out=warning: RTTI symbol not found for class 'StackStringStream<4096ul>'
...) at ./src/osdc/ObjectCacher.h:760
#9 operator<< (out=warning: RTTI symbol not found for class 'StackStringStream<4096ul>'
..., in=...) at ./src/client/Inode.cc:80
#10 0x00007ffa7045545f in Client::ll_sync_inode (this=0x55958b8a5c60, in=in at entry=0x7ffa20734270, syncdataonly=syncdataonly at entry=false) at ./src/client/Client.cc:14717
#11 0x00007ffa703d0f75 in ceph_ll_sync_inode (cmount=cmount at entry=0x55958b0bd0d0, in=in at entry=0x7ffa20734270, syncdataonly=syncdataonly at entry=0) at ./src/libcephfs.cc:1865
#12 0x00007ffa9050ddc5 in fsal_ceph_ll_setattr (creds=<optimized out>, mask=<optimized out>, stx=0x7ff8983f25a0, i=<optimized out>, cmount=<optimized out>)
at ./src/FSAL/FSAL_CEPH/statx_compat.h:209
#13 ceph_fsal_setattr2 (obj_hdl=0x7fecc8fefbe0, bypass=<optimized out>, state=<optimized out>, attrib_set=0x7ff8983f2830) at ./src/FSAL/FSAL_CEPH/handle.c:2410
#14 0x00007ffa92371da0 in mdcache_setattr2 (obj_hdl=0x7fecc9e98778, bypass=<optimized out>, state=0x7fef0d64c9b0, attrs=0x7ff8983f2830)
at ../FSAL/Stackable_FSALs/FSAL_MDCACHE/./src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1012
#15 0x00007ffa922b2bbc in fsal_setattr (obj=0x7fecc9e98778, bypass=<optimized out>, state=0x7fef0d64c9b0, attr=0x7ff8983f2830) at ./src/FSAL/fsal_helper.c:573
#16 0x00007ffa9234c7bd in nfs4_op_setattr (op=0x7fecad7ac510, data=0x7fecac314a10, resp=0x7fecad1be200) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_op_setattr.c:212
#17 0x00007ffa9232e413 in process_one_op (data=data at entry=0x7fecac314a10, status=status at entry=0x7ff8983f2a2c) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_Compound.c:920
#18 0x00007ffa9232f9e0 in nfs4_Compound (arg=<optimized out>, req=0x7fecad491620, res=0x7fecac054580) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_Compound.c:1327
#19 0x00007ffa922cb0ff in nfs_rpc_process_request (reqdata=0x7fecad491620) at ./src/MainNFSD/nfs_worker_thread.c:1508
#20 0x00007ffa92029be7 in svc_request (xprt=0x7fed640504d0, xdrs=<optimized out>) at ./src/svc_rqst.c:1202
#21 0x00007ffa9202df9a in svc_rqst_xprt_task_recv (wpe=<optimized out>) at ./src/svc_rqst.c:1183
#22 0x00007ffa9203344d in svc_rqst_epoll_loop (wpe=0x559594308e60) at ./src/svc_rqst.c:1564
#23 0x00007ffa920389e1 in work_pool_thread (arg=0x7feeb802ea10) at ./src/work_pool.c:184
#24 0x00007ffa920e6b43 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#25 0x00007ffa92178a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Further analysis of the call trace with GDB shows that the _front, _back, and _size members of the xlist<ObjectCacher::Object*> are all zero in the core dump, a state in which the assertion should hold, yet the assertion failure was triggered anyway.
(gdb) frame 7
#7 0x00007ffa7049f602 in xlist<ObjectCacher::Object*>::size (this=0x7ffa20734638, this=0x7ffa20734638) at ./src/include/xlist.h:87
87 ./src/include/xlist.h: No such file or directory.
(gdb) p *this
$1 = {_front = 0x0, _back = 0x0, _size = 0}
(gdb) frame 6
#6 0x00007ffa91078525 in ceph::__ceph_assert_fail (ctx=...) at ./src/common/assert.cc:80
80 ./src/common/assert.cc: No such file or directory.
(gdb) p ctx
$2 = (const ceph::assert_data &) @0x7ffa70587900: {assertion = 0x7ffa70530598 "(bool)_front == (bool)_size", file = 0x7ffa705305b4 "./src/include/xlist.h", line = 87,
function = 0x7ffa7053b410 "size_t xlist<T>::size() const [with T = ObjectCacher::Object*; size_t = long unsigned int]"}
This points to a race condition: the members were being modified
concurrently while the assertion was evaluated, so the check observed an
inconsistent intermediate state.
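To make the failure mode concrete, here is a minimal, self-contained C++ sketch. It is not Ceph source; only the member names and the asserted condition are taken from the GDB output above, everything else is illustrative.

#include <cassert>
#include <cstddef>

struct Node {};

// Hypothetical stand-in for xlist<ObjectCacher::Object*>: _front/_back/_size
// are only mutually consistent while the lock that serializes list mutations
// is held.
class UnsyncList {
  Node* _front = nullptr;
  Node* _back  = nullptr;   // kept to mirror the members seen in GDB
  std::size_t _size = 0;

public:
  // Writer path: the pointer and the counter are updated separately, so a
  // reader racing with this can observe _front already set while _size is
  // still 0 (or the reverse during removal).
  void push_back(Node* n) {
    if (!_front) _front = n;
    _back = n;
    ++_size;
  }

  std::size_t size() const {
    // Same condition as the ceph_assert in frame #7; it is only meaningful
    // when the caller holds the serializing lock.
    assert((bool)_front == (bool)_size);
    return _size;
  }
};

int main() {
  UnsyncList l;
  Node n;
  l.push_back(&n);
  return l.size() == 1 ? 0 : 1;
}

Under proper locking the invariant always holds; the crash occurs because the Inode operator<< path reached from ll_sync_inode evaluated it without that serialization.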
[Fix]
It may not be necessary to print the entire Inode structure; simply printing the inode number should be sufficient.
There is an upstream commit that fixes this issue:
commit 2b78a5b3147d4e97be332ca88d286aec0ce44dc3
Author: Chengen Du <chengen.du at canonical.com>
Date: Mon Aug 12 18:17:37 2024 +0800
client: Prevent race condition when printing Inode in
ll_sync_inode
In the ll_sync_inode function, the entire Inode structure is printed without
holding a lock. This can lead to a race condition when evaluating the assertion
in xlist<ObjectCacher::Object*>::size(), resulting in abnormal behavior.
Fixes: https://tracker.ceph.com/issues/67491
Co-authored-by: dongdong tao <tdd21151186 at gmail.com>
Signed-off-by: Chengen Du <chengen.du at canonical.com>
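In practice the change amounts to no longer streaming the whole Inode (whose operator<< walks ObjectCacher state, frames #7-#9 above) and logging only the inode number. A sketch of what that looks like in Client::ll_sync_inode, assuming the log statement has roughly this shape (an approximation for illustration, not the verbatim upstream diff):

-  ldout(cct, 3) << "ll_sync_inode " << *in << " " << dendl;
+  ldout(cct, 3) << "ll_sync_inode " << in->ino << " " << dendl;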
[Test Plan]
The race condition itself is difficult to reproduce reliably, but we can verify that the normal call path still functions correctly.
1. Create a Manila share and mount it locally
openstack share type create nfs_share_type False --description "NFS share type"
openstack share create --share-type nfs_share_type --name my_share NFS 1
openstack share list
openstack share access create my_share ip XX.XX.XX.XX/XX
openstack share show my_share
sudo mount -t nfs <-export_location_path-> <-mountpoint->
2. Create a file and change its permissions, ensuring that all functions work correctly without any errors
touch test
chmod 755 test
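Optionally, the libcephfs entry point seen in frame #11 can also be exercised directly with a small client. This is only an illustrative sketch, not part of the SRU test plan; the client id and ceph.conf path are assumptions for a typical deployment.

// build with: g++ -o sync_inode_check sync_inode_check.cc -lcephfs
#include <cephfs/libcephfs.h>
#include <cstdio>

int main() {
  struct ceph_mount_info *cmount = nullptr;
  if (ceph_create(&cmount, "admin") != 0)              // "admin" is an assumed client id
    return 1;
  ceph_conf_read_file(cmount, "/etc/ceph/ceph.conf");  // assumed conf location
  if (ceph_mount(cmount, "/") != 0) {
    ceph_release(cmount);
    return 1;
  }

  struct Inode *root = nullptr;
  if (ceph_ll_lookup_root(cmount, &root) == 0) {
    // This ends up in Client::ll_sync_inode(), the function patched here.
    int r = ceph_ll_sync_inode(cmount, root, 0);
    printf("ceph_ll_sync_inode returned %d\n", r);
    ceph_ll_put(cmount, root);
  }

  ceph_unmount(cmount);
  ceph_release(cmount);
  return 0;
}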
[Where problems could occur]
The patch only changes a log statement in order to avoid the race condition.
However, if anything were wrong with the patch, it could disrupt Ceph's ll_sync_inode functionality, which is exercised when setting attributes on a Manila share via the NFS protocol.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2078906/+subscriptions