up and running - Raid 1 in HPE Gen10

Jon LaBadie ubu at labadie.us
Fri Jun 20 06:10:21 UTC 2025


On Thu, Jun 19, 2025 at 12:31:38PM -0400, Robert Moskowitz via ubuntu-users wrote:
>I kept it simple.
>
>I had to delete all the drive setup stuff from the failed install. I
>know how to do that.  I have done it enough in years past with Centos
>and Fedora!
>
>I set up the internal USB as boot.  Now this is a bit of a challenge, as
>2 USB sticks show, both 16GB.  But one shows "in use", so I assumed that was
>the Ubuntu install drive.  I set up the other for /boot

2+ years ago I was setting up my HP MicroServer Gen10 Plus.
Not certain what OS I was going to use, I set up a Ventoy USB stick
with several "live" distros: RHEL, Fedora, Mint, Ubuntu, ...

First observation: not every external-facing USB port could be used to
boot from the Ventoy stick.  Happily, the two front-panel ports worked.

Second, I tested using the internal port, not just for a boot
partition but for the full OS/distro installation.  I forget whether I
used a 32 or 64 GB stick, but I removed all drives and installed
Ubuntu 22.04 directly to the internal stick.

I ran it that way for a week or more while learning more about
Ubuntu.

I don't recall whether the Gen 10 model had iLO (Integrated Lights-Out
management) built in or optional, but on the Gen 10 Plus model it
was a $50 add-on.  If you have iLO, you should be able to send
it instructions from another system to boot the server.  Alternatively,
you could enable Wake-on-LAN in the BIOS and wake the computer
on a schedule from another system.
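If Wake-on-LAN appeals, the "magic packet" format is simple: 6 bytes
of 0xFF followed by the target NIC's MAC address repeated 16 times,
sent as a UDP broadcast (commonly to port 9).  A minimal Python
sketch; the MAC shown is a placeholder, not a real address:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet via UDP broadcast on the local network."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example (placeholder MAC of the server's NIC):
# send_wol("94:57:a5:12:34:56")
```

The sender has to be on the same broadcast domain as the target NIC,
which is the same same-network caveat you hit with the iLO login.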

The Gen 10 Plus iLO module plugs into the single PCIe slot but has
a PCIe slot of its own.  I used that slot to install a card that
accepted two NVMe sticks: one for the system and one for /home.
Thus I no longer needed the internal USB stick for the OS.
But I did not want to use any of the four spinning disks for the OS,
preferring to reserve them for backup storage.

I second the earlier suggestion to use md software RAID.  As was
noted, the data would still be available even if you reinstalled
the OS, or even installed a different distro.
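If you do go the md route, the kernel reports array health in
/proc/mdstat: a RAID1 mirror shows [UU] when both members are up and
[U_] when degraded.  A small sketch of parsing that output; the sample
text and device names are illustrative, not from any real system:

```python
import re

def parse_mdstat(mdstat_text: str) -> dict:
    """Map each md array name to True if all mirror members are up.

    Each array's status line ends with a bracketed string such as
    [UU] (all members up) or [U_] (one member failed or missing).
    """
    health = {}
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
            continue
        status = re.search(r"\[([U_]+)\]\s*$", line)
        if current and status:
            health[current] = "_" not in status.group(1)
            current = None
    return health

# Illustrative /proc/mdstat snippet with one healthy and one degraded mirror:
SAMPLE = """\
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      3906886464 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdd1[1] sdc1[0]
      3906886464 blocks super 1.2 [2/1] [U_]
"""
```

Something like this, run from cron, is an easy way to get mailed when
a mirror member drops out (mdadm --monitor can do the same job).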

jl
>
>I then grouped the first 2 listed drives into a Raid1 group.
>
>I then formatted this whole 4TB group as EXT4, no LVM setup.
>
>I did not select any packages to add and let the install proceed.  I
>stayed watching the screen and finally the install finished.  The reboot
>hung, I had to power off and pull the install drive.
>
>Powered up and eventually got to login as me.
>
>Did an apt upgrade and rebooted again.
>
>All ready, by some definition of ready to use!
>
>So it seems mdadm is now part of the installer.  But I kept it simple
>and will work with the 2nd set of drives later.
>
>Of course it would be nice to know, physically, which drive is which...
>
>I have to figure out why the HP iLO login is failing.  I did see one
>note that the system accessing it must be on the same network as the
>iLO, and mine is not.  So I will have to set things up that way and see
>where it takes me.  I mean, I get the iLO login screen, but cannot log in.
>
>On 6/19/25 11:04 AM, Robert Moskowitz via ubuntu-users wrote:
>> Install failed.  I probably tried too much.
>>
>> Ubuntu 24 seems to give everything needed for a software RAID install. 
>> But I went and set up the 2nd pair of drives as a 2nd Raid group and
>> set that up with an LVM partition.  My bad.
>>
>> This time, I think I have it "figured" out how to set up just the
>> first 2 drives as a raid group.  Set up an LVM on it, then EXT4 for /.
>>
>> "Later", I can set up another partition in the LVM for all the backups
>> I will be rsyncing over at night.  Then set up the 2nd pair of drives.
>>
>> Perhaps.  Here goes.
>>
>> On 6/19/25 8:20 AM, Robert Moskowitz via ubuntu-users wrote:
>>>
>>>
>>> On 6/19/25 7:48 AM, Sam Varshavchik wrote:
>>>> Robert Moskowitz via ubuntu-users writes:
>>>>
>>>>> I have an HPE Gen 10 with 4 4TB drives that has been sitting for 2
>>>>> years for me to figure out how to get RAID working.
>>>>>
>>>>> Now that I am working with Ubuntu, it almost makes sense, but I am
>>>>> not there.  Yet.
>>>>>
>>>>> Seems I have to go into custom setup for the drives.
>>>>>
>>>>> So far I have only taken 2 drives into a RAID1 config and the other
>>>>> two into an LVM
>>>>>
>>>>> Once I selected Raid and put the two drives in it, I was only
>>>>> offered formatting as EXT4 for /
>>>>>
>>>>> I am being told I need a boot partition.  Of course.
>>>>>
>>>>> So is there any decent guide for this?
>>>>
>>>> It is certainly possible because I have done this exact same thing,
>>>> but using mdraid rather than any hardware-based RAID. A lot of ink
>>>> has been spilled about this, over the years, but the capsule summary
>>>> is that mdraid over the long term will fare better. You can pull the
>>>> disks and drop them into another box and it'll just work, for
>>>> example. You can't do this with hardware RAID, without also
>>>> installing identical hardware, too. And if your hardware RAID card
>>>> gives up the magic smoke you're SOL, until you can find an identical
>>>> replacement.
>>>
>>> I like the sound of this.  I have a long history of pulling a drive
>>> out of one box, placing it in another, and getting on with life.  It
>>> also allowed me to upgrade to a better processor/more memory without
>>> rebuilding the drive.
>>>
>>>>
>>>> I basically followed this:
>>>>
>>>> https://askubuntu.com/questions/1299978/
>>>
>>> Maybe this will explain opening up terminal...
>>>
>>>>
>>>> but skipped some of the fluff up front. The capsule summary is:
>>>>
>>>> 1) Boot the installer, open a terminal shell
>>>>
>>>> 2) apt update, apt install mdadm
>>>
>>> where in the installer do you get the option to open a shell?
>>>
>>>
>>> and it seems that the Ubuntu 24 is offering me some tools for setting
>>> up the RAID.
>>>
>>>>
>>>> 3) Use fdisk (or sgdisk) to partition both drives, then use mdadm to
>>>> assemble them into RAID arrays.
>>>
>>> How long has it been since I used fdisk for this?  I long ago
>>> switched to parted.  But even that I have to read my crib notes.
>>>
>>>> This is where I wish I'd done something different than the guide,
>>>> which basically tells you that the EFI boot partition is SOL, as far
>>>> as mdraid goes, and gives you marching orders to just create a
>>>> non-RAID partition on both disks, use the one on the boot drive for
>>>> the UEFI partition, set up some automation to dd it to the other
>>>> drive's partition, and use efibootmgr to include both disks as
>>>> bootable devices.
>>>>
>>>> Since then, I've learned that it should be possible to use mdraid
>>>> for the EFI boot partition by using mdraid metadata 1.0 (superblock
>>>> at the end of the device) instead of 1.1 (still need to fiddle
>>>> with efibootmgr). It just so
>>>> happens that this is exactly the situation on my other box running
>>>> Fedora, which has no issues with the efi boot partition on mdraid
>>>> (Fedora's installer directly supported installation to mdraid for a
>>>> very long time, at least a decade). I'll try that next time.
>>>
>>> This box, and many of its ilk, has an internal slot for a USB or SD
>>> device where you can place the boot loader.  I figure that after each
>>> new kernel I can dd that partition to a file on the RAID HD for safe
>>> keeping.
>>>
>>>>
>>>> 4) Start the Ubuntu installer. It'll be slightly confused and list
>>>> both the physical partitions and the RAID partition together, as
>>>> installation targets. Be sure to select the right partitions
>>>> (disclaimer, this was the story in Ubuntu 20, current experience
>>>> might vary).
>>>
>>> Thank you for sharing your experiences.  I will definitely check out
>>> that guide.
>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>

-- 
Jon H. LaBadie                  ubu at labadie.us


