[Bug 794963] Re: mdadm allows growing an array beyond metadata size limitations
slith
794963 at bugs.launchpad.net
Thu Jun 9 14:48:16 UTC 2011
Here is what happened...
I recently upgraded a system running 6 x 2TB HDDs to an EFI
motherboard and 6 x 3TB HDDs. The final step in the process was growing
the RAID-5 array (/dev/md1, metadata v0.90, 6 component devices of just
under 2TB each) to use devices of just under 3TB each. At the time I
forgot about the limitation that metadata 0.90 does not support
component devices over 2TB. The grow nevertheless completed
successfully, and I used the system without problems for about 2 weeks.
LVM2 uses /dev/md1 as a physical volume for the volume group radagast,
and pvdisplay showed that /dev/md1 had a size of 13.64 TiB. I had been
writing data to it regularly and believe I had well exceeded the
original size of the old array (about 9.4TB). All was fine until a few
days ago, when I rebooted the system. The boot got part way and then
failed to mount some of the file systems that were on logical volumes
on /dev/md1. So it seems the mdadm --grow operation was successful, but
the mdadm --assemble on boot completed with a smaller array than the
one left behind by the grow operation.
Here is some relevant information:
$ sudo pvdisplay /dev/md1
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               radagast
  PV Size               13.64 TiB / not usable 2.81 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              3576738
  Free PE               561570
  Allocated PE          3015168
  PV UUID               0ay0Ai-jcws-yPAR-DP83-Fha5-LZDO-341dQt
Below is the detail of /dev/md1 after the attempted reboot.
Unfortunately I don't have any detail of the array from before the grow
or the reboot; however, the pvdisplay above does show the 13.64 TiB
size of the array after the grow operation.
$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Wed May 20 17:19:50 2009
     Raid Level : raid5
     Array Size : 3912903680 (3731.64 GiB 4006.81 GB)
  Used Dev Size : 782580736 (746.33 GiB 801.36 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 00:35:43 2011
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 6650f3f8:19abfca8:e368bf24:bd0fce41
         Events : 0.6539960

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       67        2      active sync   /dev/sde3
       3       8       51        3      active sync   /dev/sdd3
       4       8       35        4      active sync   /dev/sdc3
       5       8       83        5      active sync   /dev/sdf3
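The reported numbers are consistent with the grown per-device size having wrapped around the 4-byte sector count in the 0.90 superblock. The following check is my own reconstruction, not something read from the superblock: it assumes the true grown size was exactly one 2 TiB wrap above the truncated "Used Dev Size" shown above.

```python
# Hypothetical reconstruction: test whether the reported sizes are consistent
# with the 0.90 superblock's 4-byte sector count wrapping at 2**32 sectors
# (2 TiB with 512-byte sectors).
SECTOR = 512
KIB = 1024

reported_dev_kib = 782580736           # "Used Dev Size" from mdadm --detail
wrap_kib = (2**32 * SECTOR) // KIB     # 2 TiB expressed in KiB

# Assumption: the true grown per-device size was one wrap above the
# reported (truncated) value.
grown_dev_kib = reported_dev_kib + wrap_kib
print(round(grown_dev_kib / 2**30, 2), "TiB per device")  # 2.73 TiB,
                                                          # i.e. "just under 3 TB"

# RAID-5 over 6 devices has 5 data members; compare with pvdisplay's PV size.
pv_tib = 5 * grown_dev_kib / 2**30
print(round(pv_tib, 2), "TiB total")   # 13.64 TiB, matching pvdisplay
```

The truncated Array Size above (3912903680 KiB = 5 x 782580736 KiB) fits the same picture: the assembled array is exactly 2 TiB per member smaller than the grown one.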
And here is the partition layout of each drive; /dev/sd[abcdef] are
partitioned identically.
$ sudo parted /dev/sda print
Model: ATA Hitachi HDS72303 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1066kB  1049kB                     bios_grub
 2      1066kB  207MB   206MB   ext3               raid
 3      207MB   3001GB  3000GB                     raid
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to mdadm in Ubuntu.
https://bugs.launchpad.net/bugs/794963
Title:
mdadm allows growing an array beyond metadata size limitations
Status in “mdadm” package in Ubuntu:
New
Bug description:
Binary package hint: mdadm
It is possible to command mdadm to grow an array such that the array space on a component partition exceeds the maximum size representable in the 0.90 metadata format, which is just over 2TB (the per-device size is held in a 4-byte sector count). When told to do this, mdadm appears to proceed without error and writes a bogus sector count into the 4-byte field in the superblocks. The system then operates with the over-enlarged array without apparent issue, but only until the next reboot, when the array is assembled at the size recorded in the superblock. At that point user data becomes inaccessible and the LVM volumes fail to mount, with seemingly no way to recover the inaccessible data.
Obviously, mdadm should refuse to grow the array size beyond the size restriction of its own metadata.
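The refusal asked for here amounts to a bounds check before the grow is committed. A minimal sketch of such a check, in Python rather than mdadm's actual C code, with illustrative names throughout:

```python
# Illustrative sketch of the missing validation; not mdadm's real code.
# Metadata 0.90 stores the per-device size in a 4-byte sector count, so any
# component size of 2**32 sectors (2 TiB) or more cannot be represented.

MAX_090_SECTORS = 2**32  # first unrepresentable value for a 4-byte field

def validate_grow(metadata_version: str, new_dev_sectors: int) -> None:
    """Refuse a --grow that the superblock format cannot record."""
    if metadata_version == "0.90" and new_dev_sectors >= MAX_090_SECTORS:
        raise ValueError(
            "component size %d sectors exceeds the 0.90 metadata limit; "
            "refusing to grow (v1.x metadata would be needed)" % new_dev_sectors
        )

# A ~2.73 TiB component, as in this report, would now be rejected:
try:
    validate_grow("0.90", 2 * 2930064384)  # per-device size in sectors
except ValueError as e:
    print("grow refused:", e)
```

With a check like this, the grow in this report would have failed loudly at the command line instead of silently writing a wrapped sector count that only bites at the next assemble.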
Seen with mdadm v2.6.7.1 (15th October 2008) on Ubuntu Server 10.04 64-bit.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/794963/+subscriptions