Setting up my 2nd set of Raid drives - HPE Gen10 Plus
Robert Moskowitz
rgm at htt-consult.com
Tue Jun 24 18:49:12 UTC 2025
On 6/23/25 11:48 PM, Robert Moskowitz via ubuntu-users wrote:
> With Google's AI I have come up with the following:
>
> sgdisk -Z /dev/md2
> sgdisk -n 1:0:0 -t 1:8300 -c 1:"RAID_Storage2" /dev/md2
I needed a reboot here...
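(Though in hindsight a reboot may not have been strictly needed; if I
understand partprobe right, asking the kernel to re-read md2's new
partition table should be enough. Untested on this box:

partprobe /dev/md2     # ships with the parted package
partx -u /dev/md2      # util-linux alternative
)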
> mkfs.ext4 /dev/md2p1
> mkdir /Storage2
> mount /dev/md2p1 /Storage2
> blkid /dev/md2p1
> UUID=<your_raid_uuid> /Storage2 ext4 defaults 0 2
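To fill in that UUID placeholder, something along these lines should do
it -- untested exactly as written here:

blkid -s UUID -o value /dev/md2p1    # prints just the UUID to paste into /etc/fstab
umount /Storage2                     # then re-test the new fstab entry
mount -a
df -h /Storage2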
/Storage2 is available
Good enough.
The crontab shutdown at 4:10am worked last night.
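In case it helps anyone else, the entry in root's crontab is along these
lines (added with "crontab -e" as root):

# m   h   dom mon dow   command
10    4   *   *   *     /usr/sbin/shutdown -h now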
Next, since the system will only be on from roughly 1am to 4am, I need to
change the time for automatic apt updating. I found info on that.
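The current plan, not yet tested, is to override the two stock apt
systemd timers with drop-ins so they fire inside that window, e.g.:

# systemctl edit apt-daily.timer
[Timer]
OnCalendar=
OnCalendar=*-*-* 01:30
RandomizedDelaySec=15m

# systemctl edit apt-daily-upgrade.timer
[Timer]
OnCalendar=
OnCalendar=*-*-* 02:30
RandomizedDelaySec=15m

followed by "systemctl daemon-reload" and a check with
"systemctl list-timers 'apt-daily*'".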
>
> It LOOKS right, but some extra eyes, please...
>
> Thanks
>
> On 6/23/25 7:28 PM, Robert Moskowitz via ubuntu-users wrote:
>> This is for my HPE Gen10 Plus.
>>
>> In the Ubuntu 24 install, I put the boot drive on the Internal USB
>> stick, as you will see below.
>>
>> I set up two Raid groups, messed up, and restarted the install. This
>> time I created just one Raid group (md0) and set it up as one big
>> ext4 partition.
>>
>> Now I want to set up the remaining 2 drives as an ext4 partition
>> mounted at /storage2.
>>
>> The only guide I have so far is:
>>
>> https://askubuntu.com/questions/1299978/install-ubuntu-20-04-desktop-with-raid-1-and-lvm-on-machine-with-uefi-bios
>>
>>
>> df -h shows:
>>
>> Filesystem      Size  Used Avail Use% Mounted on
>> tmpfs           780M  1.3M  778M   1% /run
>> efivarfs        494K  125K  365K  26% /sys/firmware/efi/efivars
>> /dev/md0        3.6T  393G  3.1T  12% /
>> tmpfs           3.9G     0  3.9G   0% /dev/shm
>> tmpfs           5.0M     0  5.0M   0% /run/lock
>> /dev/sdf1       688M  6.2M  682M   1% /boot/efi
>> tmpfs           780M   12K  780M   1% /run/user/1000
>>
>> So the root is definitely md0. But then I run cat /proc/mdstat:
>>
>> Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
>> md2 : active raid1 sdc[1] sda[0]
>>       3906886464 blocks super 1.2 [2/2] [UU]
>>       bitmap: 0/30 pages [0KB], 65536KB chunk
>>
>> md0 : active raid1 sdb[0] sdd[1]
>>       3906886464 blocks super 1.2 [2/2] [UU]
>>       bitmap: 0/30 pages [0KB], 65536KB chunk
>>
>> And somehow, my aborted effort to create that 2nd Raid group survived
>> into the new install. Or so it seems.
>>
>> But to further confuse me:
>>
>> # lsblk
>> NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
>> sda             8:0    0  3.6T  0 disk
>> └─md2           9:2    0  3.6T  0 raid1
>>   └─vg0-lv--0 252:0    0  3.6T  0 lvm
>> sdb             8:16   0  3.6T  0 disk
>> └─md0           9:0    0  3.6T  0 raid1 /
>> sdc             8:32   0  3.6T  0 disk
>> └─md2           9:2    0  3.6T  0 raid1
>>   └─vg0-lv--0 252:0    0  3.6T  0 lvm
>> sdd             8:48   0  3.6T  0 disk
>> └─md0           9:0    0  3.6T  0 raid1 /
>> sde             8:64   0  2.7T  0 disk
>> └─sde1          8:65   0  2.7T  0 part
>> sdf             8:80   1 14.5G  0 disk
>> └─sdf1          8:81   1  689M  0 part  /boot/efi
>>
>> I did choose lvm in that aborted install for the 2nd Raid group, and
>> it seems like that is still there?
>>
>> I don't think I need lvm, but I am open to hearing why I might. Can
>> someone point me to a guide that will get md2 usable as /storage2?
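Following up on my own question: since lsblk shows a leftover vg0/lv-0
sitting on md2, the stale LVM layer presumably wants tearing down before
repartitioning (which may also be why I needed the reboot above). The
rough, untested sequence I had in mind, assuming nothing on vg0 is worth
keeping:

lvremove /dev/vg0/lv-0    # drop the leftover logical volume
vgremove vg0              # drop the volume group
pvremove /dev/md2         # clear the LVM PV signature
wipefs -a /dev/md2        # belt and braces before sgdisk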
>>
>> My googling for such a guide has come up empty...
>>
>> Thanks!
>>
>> Oh, and I will need to figure out which physical drives are which sd_
>> device, so that if I need to, I will know which one to fix.
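One approach, assuming the controller exposes serial numbers to lsblk
and smartctl, is to match each sd_ device's serial against the label on
the drive carrier:

lsblk -o NAME,SIZE,SERIAL,MODEL
smartctl -i /dev/sda        # per-drive detail, from smartmontools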
>>
>> And I have rsyncd up on this box, ready to start receiving backups.
>> But there seems to be no power-on hardware timer in the Gen10+, so I
>> will need a power strip with a timer to turn it on at 1am and a cron
>> job to shut it down at 4am...