Help with an LVM2 server?
Blaine Horrocks
ubuntublaine at gmail.com
Tue Jul 3 13:58:28 UTC 2007
Gord,
It looks like you had a total of 745.23G of storage in your three
original drives, and all of that storage is already used by your
existing filesystems. Your pvr filesystem uses 513G of it, and I
suspect the remaining 232G (745.23G - 513G) is allocated to your
other filesystems (root, boot, etc.) that aren't listed in your
email. LVM has no unallocated storage left over: Free PE is zero.
There are essentially four steps to extending an LVM filesystem, plus
some mental juggling and getting to know the tools.
I'll take a guess that you are spanning the three disks using LVM and
that they are not in a RAID config. You can check this by running
pvdisplay. If they are in some type of RAID config you will see md
devices listed as the physical volumes, something like this:
root at yoru:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg_system
  PV Size               38.34 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              9816
  Free PE               7768
  Allocated PE          2048
  PV UUID               aJIBYM-rTGD-x9xs-tewY-QmIW-Kdgj-zNRkgs

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               data1
  PV Size               894.27 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              228932
  Free PE               62532
  Allocated PE          166400
  PV UUID               5IAqPv-Tu99-wmCW-awJ0-MZyP-0ZIL-J3BrKf
If you are in a RAID config (mdadm etc.) then you need to do a little
more reading, because what you do depends on what RAID level you are
running. As you can't tell from this snippet (pvdisplay doesn't show
it), md1 is RAID 0 and md0 is RAID 5. :D
I'll assume that you just have raw drives, with no RAID. That is
OK, but be warned that splitting your filesystems across drives
raises the stakes: a single drive failure could kill multiple
filesystems. :(
To add a new drive to an LVM set you basically need to do the
following sequence of operations. Read the man pages for each
command; I'm doing this mainly from memory:
Add the new physical drive to the LVM pool (pvcreate /dev/sda1)
Add the PV to a volume group (vgextend archive /dev/sda1)
The new space should then show up in vgdisplay as a bunch of new
Free PE (physical extents); see the short sketch below.
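Strung together for your new drive, those first two steps would look
roughly like this. I'm assuming /dev/sda1 really is a partition on
the new drive and that archive is the volume group you want to grow,
both taken from your mail; double check with fdisk -l and vgdisplay
before running anything.

pvcreate /dev/sda1           # label the partition as an LVM physical volume
vgextend archive /dev/sda1   # add it to the archive volume group
vgdisplay archive            # Free PE should now show the new drive's extents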
Then you can add PE to logical drives by using lvextend
For instance, using my md0 example above, it's the "drive" in my data1
volume group and has multiple filesystems on it (pardon the poor
naming):
root at yoru:~# ls -al /dev/data1/
total 0
drwx------  2 root root   180 2007-07-02 23:06 .
drwxr-xr-x 16 root root 14140 2007-07-02 23:04 ..
lrwxrwxrwx  1 root root    28 2007-07-02 23:06 lv_backups -> /dev/mapper/data1-lv_backups
lrwxrwxrwx  1 root root    25 2007-07-02 23:06 lv_home -> /dev/mapper/data1-lv_home
lrwxrwxrwx  1 root root    29 2007-07-02 23:06 lv_software -> /dev/mapper/data1-lv_software
lrwxrwxrwx  1 root root    26 2007-07-02 23:06 lv_video -> /dev/mapper/data1-lv_video
lrwxrwxrwx  1 root root    24 2007-07-02 23:06 media1 -> /dev/mapper/data1-media1
lrwxrwxrwx  1 root root    23 2007-07-02 23:06 music -> /dev/mapper/data1-music
lrwxrwxrwx  1 root root    27 2007-07-02 23:06 vmsystems -> /dev/mapper/data1-vmsystems
As you can see, I like to have lots of filesystems. It makes backup
easier.
To extend a logical drive you would then do something like:
lvextend -L NewTotalLogicalDriveSize /dev/data1/lv_video
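In your case it might look something like the line below. The LV
path here is only a guess based on the /dev/mapper/pvr name in your
df output, and the size is made up for illustration; run lvdisplay
to see the real path before trying it.

# hypothetical: grow the pvr logical volume by 200G of the new Free PE
# (-L +200G adds to the current size; a plain -L 700G would set an
#  absolute total instead)
lvextend -L +200G /dev/archive/pvr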
lvextend can in fact be told which physical volume to take the new
extents (bits of logical drive) from, rather than having LVM just
grab them from whatever Free PE is available in the LV's volume
group (storage pool).
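As a made-up example against my own setup, this would grow lv_video
by 20G and force the new extents to come from /dev/md0 specifically
(the trailing PV argument is optional; leave it off and LVM picks
for you):

lvextend -L +20G /dev/data1/lv_video /dev/md0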
Keeping some spare PE lets you bail yourself out when you run out of
drive space. Like yesterday, when I did an Edgy to Feisty upgrade,
ran out of space in /var, and extended the root filesystem from the
volume group pool.
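A hypothetical version of that bail-out, assuming an ext3 root LV
called lv_root in my vg_system group (the LV name is made up for
illustration; adjust for whatever your root actually is):

lvextend -L +2G /dev/vg_system/lv_root   # pull 2G of spare PE into the root LV
resize2fs /dev/vg_system/lv_root         # grow ext3 to fill the LV (online on recent kernels)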
Lastly...
You have to tell the actual filesystem that it needs to use the new
free space. For JFS you do this with a remount while the filesystem
is mounted. Other fs types are different or cannot be extended.
RTFM, YMMV, etc. etc.
mount -o remount,resize=NewJFSBlockCount /home/pvr
NewJFSBlockCount is the total available storage (e.g. 513G +
NewAllocatedStorage) DIVIDED BY the JFS BLOCK SIZE. The jfs block
size defaults to 4k, but might be smaller if you did some funky tuning.
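For example, if pvr ends up at (say) 750G with the default 4k block
size, the count works out like this (rough shell arithmetic; check
my math against the jfs docs before trusting it):

# 750G expressed in 4k blocks: 750 * 1024 * 1024 KB / 4 KB per block
echo $((750 * 1024 * 1024 / 4))    # prints 196608000

mount -o remount,resize=196608000 /home/pvr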
Hope that helps.
Blaine
1-Jul-07, at 14:49, G Mc.Pherson wrote:
> Hi Folks,
>
> I recently upgraded my server here at home. This server currently
> has 3
> hard drives operating under LVM2 and formatted as JFS and I'm
> trying to
> add a 4th drive (/dev/sda1). The problem is that the howto at
> http://www.tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html talks about
> ext2fs.
>
>
> After fooling around, I managed to get vgdisplay to report the
> following:
>
> --(cut here)--
>
> --- Volume group ---
> VG Name               archive
> System ID
> Format                lvm2
> Metadata Areas        4
> Metadata Sequence No  28
> VG Access             read/write
> VG Status             resizable
> MAX LV                0
> Cur LV                1
> Open LV               1
> Max PV                0
> Cur PV                4
> Act PV                4
> VG Size               745.23 GB
> PE Size               4.00 MB
> Total PE              190778
> Alloc PE / Size       190778 / 745.23 GB
> Free PE / Size        0 / 0
> VG UUID               dTVTyh-4P0j-aKmx-M8WW-eQNY-VKon-OMrGkY
>
> --(cut here)--
>
> Note that the VG Size is reporting 745.23GB, however when I
> issue a 'df -h', I get the following report:
>
> --(cut here)--
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/pvr
> 513G 288G 225G 57% /home/pvr
> root at vsrv:/home/pvr#
> --(cut here)--
>
> The question is, it appears that I'm missing 232GB, which sounds
> reasonable as the additional drive is a 250G?
>
> Can anyone help?
>
> Gord
>
> --
> ubuntu-ca mailing list
> ubuntu-ca at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-ca