up and running - Raid 1 in HPE Gen10

Robert Moskowitz rgm at htt-consult.com
Fri Jun 20 13:02:15 UTC 2025


I can't log into my iLO; see below...

On 6/20/25 2:10 AM, Jon LaBadie wrote:
> On Thu, Jun 19, 2025 at 12:31:38PM -0400, Robert Moskowitz via 
> ubuntu-users wrote:
>> I kept it simple.
>>
>> I had to delete all the drive setup stuff from the failed install. I
>> know how to do that.  I have done it enough in years past with Centos
>> and Fedora!
>>
>> I set up the Internal USB as boot.  Now this is a bit of a challenge, as
>> 2 USB show, both 16GB.  But one shows "in use", so I assumed that was
>> the Ubuntu install drive.  I set up the other for /boot
>
> 2+ years ago I was setting up my HP MicroServer Gen10 Plus.
> Not certain what OS I was going to use I set up a Ventoy USB stick
> with several "live" distros, RHEL, Fedora, Mint, Ubuntu, ...
>
> First observation, not every external facing USB port could be used to
> boot from the Ventoy stick.  Nicely the two front panel ports worked.
>
> Second, I tested using the internal port, but not just for a boot
> partition but for the OS/distro installation.  I forget whether I
> used a 32 or 64GB stick, but I removed all drives and installed
> Ubuntu 22.04 directly to the internal stick.

I have only the boot partition on the Internal USB slot.  It was easy in 
the Ubuntu installer to select it.

I will set up a dd to a file on the main partition for a backup.  No, 
probably to an external HD...
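That dd-and-verify flow can be sketched like this.  The block below rehearses it on a scratch file so nothing real is touched; in actual use SRC would be the internal-USB boot partition (something like /dev/sde1, a hypothetical name — confirm with lsblk first) and DEST a file on the external HD.

```shell
#!/bin/sh
# Rehearsal of the dd backup-and-verify flow on a scratch file.
# For real use, point SRC at the boot partition device (check lsblk!)
# and DEST at a file on the external HD.
set -e
tmp=$(mktemp -d)
head -c $((4 * 1024 * 1024)) /dev/urandom > "$tmp/stand-in-partition"
SRC="$tmp/stand-in-partition"
DEST="$tmp/boot-backup.img"

dd if="$SRC" of="$DEST" bs=1M conv=fsync status=none

# Never trust a backup you haven't compared back to the source:
src_sum=$(sha256sum "$SRC"  | cut -d' ' -f1)
dst_sum=$(sha256sum "$DEST" | cut -d' ' -f1)
[ "$src_sum" = "$dst_sum" ] && echo "backup verified"
```

Restoring is the same dd with if= and of= swapped; on a real partition, add status=progress to watch it go.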

>
> I ran it that way for a week or more while learning more about
> Ubuntu.
>
> I don't recall if the Gen 10 model had iLO (integrated Lights Out
> management), built-in or optional but on the Gen 10 Plus model it
> was a $50 add on.

I paid $100 for mine from Amazon.  I got an iLO5 in sealed factory 
packaging.

I have a Gen 10 Plus.

> If you have the iLO you should be able to send
> it instructions to boot the system from another system. 

If I could log into it.

From the server setup, I set the iLO to a static address, changed the 
hostname, and added another user ID with all permissions.

I can access the login web page with no problem, but I cannot log in. :(

Not with the ID I added, nor the built-in Administrator ID.  For the 
Administrator password I tried both the S/N from the label on the sealed 
factory package (which I had to break open to get to the card) and the 
barcode S/N that is also on the box.  I even tried the hostname 
originally assigned to the card.

Can't log in.  I am using Firefox on Fedora 41.  I even tried putting my 
notebook on the same addressing subnet as the iLO.

Can't get in.
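One escape hatch worth knowing when the web login is dead: on ProLiants you can talk to the iLO from the host OS itself with hponcfg (from HPE's tools repository), which needs no iLO credentials at all.  A minimal RIBCL script to set a new Administrator password looks roughly like this — the password value is obviously a placeholder:

```xml
<!-- ilo-reset.xml: set a new Administrator password via hponcfg.
     The LOGIN credentials are ignored when run locally through the
     host interface, which is the whole point of this recovery path. -->
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
    <USER_INFO MODE="write">
      <MOD_USER USER_LOGIN="Administrator">
        <PASSWORD value="PutANewPasswordHere"/>
      </MOD_USER>
    </USER_INFO>
  </LOGIN>
</RIBCL>
```

Run it as root on the server: sudo hponcfg -f ilo-reset.xml.  Also note that HPE typically prints the factory iLO credentials on a separate pull-tab label, not the serial-number barcode, which may explain why the S/N didn't work.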

> Alternatively you could activate Wake-on-LAN in the BIOS and boot
> the computer on schedule from another system.

I am looking at a power strip with a timer for turning it on at 1 a.m. 
for the backups.  I can also turn it on manually if I need it during the 
day.  It pulls ~50 W, which adds up over a year when it is normally only 
needed for a couple of hours every night.
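If the BIOS is set to power on when the strip energizes, a systemd timer on the server can take it from there: run the backup shortly after 1 a.m., then power the box back off.  A sketch, assuming systemd and a hypothetical /usr/local/sbin/nightly-backup script:

```ini
# /etc/systemd/system/nightly-backup.timer
[Unit]
Description=Nightly backup window

[Timer]
OnCalendar=*-*-* 01:15:00
Persistent=true

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/nightly-backup.service
[Unit]
Description=Run backups, then power off

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/nightly-backup
ExecStartPost=/usr/bin/systemctl poweroff
```

Enable with systemctl enable --now nightly-backup.timer.  Persistent=true catches up a missed run if the box happened to be powered on late.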

>
> The Gen 10 Plus iLO module plugs into the single PCI slot but has
> a PCI slot of its own.  I used that slot to install a card that
> accepted 2 NVMe sticks. 

NVMe?  I am going to have to look that up.

> One was for the system and one for /home.
> Thus I did not continue to use the internal USB stick for the OS.
> But I did not want to use any of the 4 spinning disks for OS,
> preferring to reserve them for backup storage.

With two 4TB RAID 1 groups (I need help configuring the second; 
separate note on that), I don't see the few GB for the OS being an 
issue.  I plan on running rsync as a service, and want to add Samba for 
a standalone server.  I am undecided on cloud software.
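For the record, the second pair can be joined after the fact without the installer.  A sketch, with hypothetical device names (sdc/sdd — verify with lsblk first); the run() wrapper only echoes the commands unless RUN_FOR_REAL=1 is set, so it is safe to paste as-is:

```shell
#!/bin/sh
# Create the second RAID1 group from the remaining pair of 4TB drives.
# Device names sdc/sdd are assumptions; confirm with lsblk before arming.
run() {
  if [ "${RUN_FOR_REAL:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
run mkfs.ext4 /dev/md1
# Record the array so the initramfs assembles it at boot:
run sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
run update-initramfs -u
# Then add it to /etc/fstab, e.g.:
#   /dev/md1  /srv/backup  ext4  defaults  0  2
```

Running whole disks (rather than one big partition each) into mdadm works, but partitioning first is the more common convention; either way, do a dry run and read the echoed commands before setting RUN_FOR_REAL=1.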

>
> I second the earlier suggestion to use md, software raid.  As was
> noted, the data would still be available even if you reinstalled
> the OS or even a different distro.

It was easy to set up with Ubuntu 24.  Everything now seems to be rolled 
into the installer.

But I need to learn which physical drive corresponds to which device 
name, so in the event I need to replace one, I know which to pull!  My 
QNAP SMB server clearly indicates which one is faulting.  I know, as I 
had to replace one...
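A sketch of that mapping exercise (tolerant of drives that are not present, so it is safe to run anywhere): read each array member's serial number, then match it against the label on the drive sled before pulling anything.

```shell
#!/bin/sh
# Map Linux device names to drive serial numbers.  The sdX names that
# appear in /proc/mdstat are positional and can change between boots;
# the serial printed on the drive label does not.
cat /proc/mdstat 2>/dev/null || true         # which sdX are in which mdX

for dev in /dev/sd[a-z]; do
  [ -b "$dev" ] || continue                  # skip nonexistent entries
  serial=$(lsblk -dno SERIAL "$dev" 2>/dev/null)
  printf '%s -> serial %s\n' "$dev" "${serial:-unknown}"
done

# /dev/disk/by-id encodes model+serial in stable symlink names:
ls -l /dev/disk/by-id/ 2>/dev/null | grep -v -- '-part' || true
scanned=yes
```

smartctl -i /dev/sda (from the smartmontools package) shows the same serial plus SMART health data, which is worth checking before deciding which drive is actually faulting.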

I have a gutter to clean out before the next rainstorm in a couple of 
hours, then back at it.  Plus 2 ZBOX nanos are arriving today for 
Ubuntu installs, to replace the last 2 armv7 boxes that have run 
CentOS 7 for the past 8 years...  These boxen pull only 8 W, the same 
as my Cubieboard2 systems; rather impressive.  More memory, more 
processor, etc.; and I found some AD65 in PA for $32.

>
> jl
>>
>> I then grouped the first 2 listed drives into a Raid1 group.
>>
>> I then formatted this whole 4TB group as EXT4, no LVM setup.
>>
>> I did not select any packages to add and let the install proceed.  I
>> stayed watching the screen and finally the install finished. The reboot
>> hung, I had to power off and pull the install drive.
>>
>> Powered up and eventually got to login as me.
>>
>> Did an apt upgrade and rebooted again.
>>
>> All ready, by some definition of ready to use!
>>
>> So it seems mdadm is now part of the installer.  But I kept it simple
>> and will work with the 2nd set of drives later.
>>
>> Of course it would be nice to know, physically, which drive is which...
>>
>> I have to figure out why the HP iLO login is failing.  I did see one
>> note that the system accessing it has to be on the same network as the
>> iLO; I am not.  So I will have to set things up that way and see where
>> it takes me.  I mean, I get the iLO login screen, but cannot log in.
>>
>> On 6/19/25 11:04 AM, Robert Moskowitz via ubuntu-users wrote:
>>> Install failed.  I probably tried too much.
>>>
>>> Ubuntu 24 seems to give everything needed for software Raid install.
>>> But I went and set up the 2nd pair of drives as a 2nd Raid group and
>>> set that up with an LVM partition.  My bad.
>>>
>>> This time, I think I have it "figured" out how to set up just the
>>> first 2 drives as a raid group.  Set up an LVM on it, then EXT4 for /.
>>>
>>> "Later", I can set up another partition in the LVM for all the backups
>>> I will be rsyncing over at nights.  Then setup the 2nd pair of drives.
>>>
>>> Perhaps.  Here goes.
>>>
>>> On 6/19/25 8:20 AM, Robert Moskowitz via ubuntu-users wrote:
>>>>
>>>>
>>>> On 6/19/25 7:48 AM, Sam Varshavchik wrote:
>>>>> Robert Moskowitz via ubuntu-users writes:
>>>>>
>>>>>> I have an HPE Gen 10 with 4 4TB drives that has been sitting for 2
>>>>>> years for me to figure out how to get RAID working.
>>>>>>
>>>>>> Now that I am working with Ubuntu, it almost makes sense, but I am
>>>>>> not there.  Yet.
>>>>>>
>>>>>> Seems I have to go into custom setup for the drives.
>>>>>>
>>>>>> So far I have only taken 2 drives into a RAID1 config and the other
>>>>>> two into an LVM
>>>>>>
>>>>>> Once I selected Raid and put the two drives in it, I was only
>>>>>> offered formatting as EXT4 for /
>>>>>>
>>>>>> I am being told I need a boot partition.  Of course.
>>>>>>
>>>>>> So is there any decent guide for this?
>>>>>
>>>>> It is certainly possible because I have done this exact same thing,
>>>>> but using mdraid rather than any hardware-based RAID. A lot of ink
>>>>> has been spilled about this, over the years, but the capsule summary
>>>>> is that mdraid over the long term will fare better. You can pull the
>>>>> disks and drop them into another box and it'll just work, for
>>>>> example. You can't do this with hardware RAID, without also
>>>>> installing identical hardware, too. And if your hardware RAID card
>>>>> gives up the magic smoke you're SOL, until you can find an identical
>>>>> replacement.
>>>>
>>>> I like the sound of this.  I have a long history of pulling a drive
>>>> out of one box and placing it another and getting on with life. Also
>>>> allowed me to upgrade to better processor/more memory without
>>>> rebuilding the drive.
>>>>
>>>>>
>>>>> I basically followed this:
>>>>>
>>>>> https://askubuntu.com/questions/1299978/
>>>>
>>>> Maybe this will explain opening up terminal...
>>>>
>>>>>
>>>>> but skipped some of the fluff up front. The capsule summary is:
>>>>>
>>>>> 1) Boot the installer, open a terminal shell
>>>>>
>>>>> 2) apt update, apt install mdadm
>>>>
>>>> where in the installer do you get the option to open a shell?
>>>>
>>>>
>>>> and it seems that the Ubuntu 24 is offering me some tools for setting
>>>> up the RAID.
>>>>
>>>>>
>>>>> 3) Use fdisk (or sgdisk) to partition both drives, then use mdadm to
>>>>> assemble them into RAID arrays.
>>>>
>>>> How long has it been since I used fdisk for this?  I long ago
>>>> switched to parted.  But even that I have to read my crib notes.
>>>>
>>>>> This is where I wish I've done something different than the guide,
>>>>> which basically tells you that the EFI boot partition is SOL, as far
>>>>> as mdraid goes, and gives you marching orders to just create a
>>>>> non-RAID partition on both disks, use the one on the boot drive for
>>>>> the UEFI partition, set up some automation to dd it to the other
>>>>> drive's partition, and use efibootmgr to include both disks as
>>>>> bootable devices.
>>>>>
>>>>> Since then, I've learned that it should be possible to use mdraid
>>>>> for the EFI boot partition by formatting it as mdraid 1.0 instead of
>>>>> mdraid 1.1 (still need to fiddle with efibootmgr). It just so
>>>>> happens that this is exactly the situation on my other box running
>>>>> Fedora, which has no issues with the efi boot partition on mdraid
>>>>> (Fedora's installer directly supported installation to mdraid for a
>>>>> very long time, at least a decade). I'll try that next time.
>>>>
>>>> This box, and many of its ilk has an internal slot for USB or SD
>>>> device where you can place the boot loader.  I figure that after each
>>>> new kernel I can dd that partition to a file on the RAID HD for safe
>>>> keeping.
>>>>
>>>>>
>>>>> 4) Start the Ubuntu installer. It'll be slightly confused and list
>>>>> both the physical partitions and the RAID partition together, as
>>>>> installation targets. Be sure to select the right partitions
>>>>> (disclaimer, this was the story in Ubuntu 20, current experience
>>>>> might vary).
>>>>
>>>> Thank you for sharing your experiences.  I will definitely check out
>>>> that guide.
>>>>
