[Bug 651846] Re: raid10 fails - "lost page write due to I/O error on md4" and " raid10_make_request bug: can't convert block across chunks or bigger than 128k 1623343324 20" - write fails, remote nfs mount of filesystem becomes unusable
Andrew Hately
651846 at bugs.launchpad.net
Fri Oct 1 07:23:15 UTC 2010
Following the above, I did this:
1) stopped the NFS server daemon, unmounted /home, stopped /dev/md4
2) made a new md4 with the default chunk size (64 KiB) but otherwise the same as above
3) made a new XFS filesystem on md4 using md3 as the log device, as before
4) mounted this as /home and exported it via NFS
5) restored 1.2 TB of backed-up data
6) copied several large files over the NFS mount
And there are no problems. (The steps not shown verbatim below went roughly as sketched next.)
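For completeness, the steps around the array creation went roughly like this; the service name and mount options are just what this Ubuntu box uses, so treat it as a sketch rather than a transcript:

# 1) quiesce NFS and tear down the old array
service nfs-kernel-server stop
umount /home
mdadm --stop /dev/md4

# 2) and 3) are the mdadm -C and mkfs.xfs runs documented verbatim below

# 4) mount the new filesystem (external XFS log on md3) and re-export it
mount -t xfs -o logdev=/dev/md3 /dev/md4 /home
exportfs -ra
service nfs-kernel-server start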
So the only significant difference is the chunk size: the default 64 KiB works, a non-default chunk size fails.
I guess somewhere in the raid10 code an assumption is made about chunk size.
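For what it's worth, the numbers in the error message are consistent with a single write straddling a chunk boundary. Assuming the "128k" in the message is the chunk size of the failing array (256 sectors) and the trailing "20" is the request size in KiB (40 sectors), then as back-of-the-envelope arithmetic:

# offset of sector 1623343324 within its 256-sector chunk, plus 40 sectors
$ echo $(( (1623343324 % 256) + 40 ))
260
# 260 > 256, so the request crosses a chunk boundary, which is exactly
# the case raid10_make_request says it cannot handle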
Some documentation of what I did:
mdadm -C /dev/md4 -v -n 6 -l 10 -p f2 -f /dev/sd[abcdef]3
mdadm: chunk size defaults to 64K
mdadm: /dev/sda3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: /dev/sdb3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: /dev/sdc3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: /dev/sdd3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: /dev/sde3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: /dev/sdf3 appears to be part of a raid array:
    level=raid10 devices=6 ctime=Tue Sep 28 14:34:19 2010
mdadm: size set to 961980160K
Continue creating array? y
mdadm: array /dev/md4 started.
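A quick way to confirm the chunk size the running array ended up with, besides the full --detail output quoted below:

cat /proc/mdstat                               # the md4 line includes the chunk size and far-copies layout
mdadm --detail /dev/md4 | grep 'Chunk Size'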
root@wibert:~# mkfs.xfs -f -d su=65536,sw=6 -l logdev=/dev/md3 -L home /dev/md4
meta-data=/dev/md4               isize=256    agcount=32, agsize=22546416 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=721485120, imaxpct=5
         =                       sunit=16     swidth=96 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =/dev/md3               bsize=4096   blocks=32112, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
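The sunit/swidth that mkfs reports are in 4 KiB filesystem blocks, so, if I'm reading them right, they line up with the 64K chunk and the 6 members:

$ echo $((16 * 4096)) $((96 * 4096))
65536 393216
# 65536 bytes = 64 KiB = one chunk; 393216 bytes = 6 x 64 KiB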
root@wibert:~# mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90
  Creation Time : Thu Sep 30 14:27:19 2010
     Raid Level : raid10
     Array Size : 2885940480 (2752.25 GiB 2955.20 GB)
  Used Dev Size : 961980160 (917.42 GiB 985.07 GB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Fri Oct 1 09:21:19 2010
          State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=1, far=2
     Chunk Size : 64K

           UUID : 4e420fd5:e9f8a4ff:fcc517c0:fe041f9d (local to host wibert)
         Events : 0.38

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       5       8       83        5      active sync   /dev/sdf3
       6       8      147        -      spare   /dev/sdj3
When did a non-default chunk size last work on raid10?
http://blog.jamponi.net/2008/07/raid56-and-10-benchmarks-on-26255_10.html
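If anyone wants to reproduce this or bisect when it broke, re-creating the array with an explicit non-default chunk should be enough to trigger it; something like the line below (the 128 only comes from the "128k" in the error message, so substitute whatever chunk size the failing array actually used):

mdadm -C /dev/md4 -v -n 6 -l 10 -p f2 -c 128 /dev/sd[abcdef]3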
--
https://bugs.launchpad.net/bugs/651846