[Bug 1726818] Re: vagrant artful64 box filesystem too small
Scott Moser
ssmoser2+ubuntu at gmail.com
Tue Oct 24 15:48:12 UTC 2017
The attached /var/log/cloud-init.log has 2 boots in it.
One starts at 14:15:18 (line 1) and the other at 14:21:17,808 (line 719).
The first boot successfully updated the partition table for /dev/sda
so that the first partition (/dev/sda1) took the whole ~ 10G disk.
2017-10-24 14:15:33,338 - cc_growpart.py[INFO]: '/' resized: changed (/dev/sda, 1) from 2359296000 to 10736352768
Then cloud-init ran 'resize2fs /dev/sda1', which exited with
code 1 and printed the following to stderr:
resize2fs: Remote I/O error While checking for on-line resizing support
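A side note on that message (my reading, not stated in the log): "Remote I/O
error" is the Linux strerror() text for EREMOTEIO (errno 121), which suggests
resize2fs is relaying an error the kernel returned while probing on-line
resize support. A quick way to confirm the errno mapping, using python3
purely as an errno table lookup:

```shell
# EREMOTEIO (121) maps to "Remote I/O error" on Linux; resize2fs prints
# strerror() text, so this is likely the errno it got back from the kernel.
python3 -c 'import errno, os; e = errno.EREMOTEIO; print(e, os.strerror(e))'
```

On Linux this prints "121 Remote I/O error".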
The second boot found no need for growpart to run (NOCHANGE),
but it still called 'resize2fs /dev/sda1', and that failed the same way.
So, the good news is that this looks reproducible.
I suspect you can see the same error just by running:
sudo resize2fs /dev/sda1
Bad news is that I'm not sure what could be going wrong.
Could you collect a 'dmesg' output?
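For comparison, a baseline sketch (mine, not from the report): resize2fs
normally grows an ext4 filesystem without complaint, which you can see on a
throwaway image file without touching any real disk. Note the report's
failure is on the on-line (mounted) path, which this sketch does not
exercise:

```shell
# Baseline check: grow a throwaway ext4 image with resize2fs (offline path).
# Needs e2fsprogs installed, but no root and no real block device.
img=$(mktemp /tmp/ext4-XXXXXX.img)
truncate -s 64M "$img"      # create a sparse 64 MiB "disk"
mkfs.ext4 -q -F "$img"      # put a filesystem on it (-F: not a block device)
truncate -s 128M "$img"     # grow the backing file
resize2fs "$img"            # grow the fs to fill it; exits 0 when healthy
echo "resize2fs exit code: $?"
rm -f "$img"
```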
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
** Also affects: e2fsprogs (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to e2fsprogs in Ubuntu.
https://bugs.launchpad.net/bugs/1726818
Title:
vagrant artful64 box filesystem too small
Status in cloud-images:
New
Status in e2fsprogs package in Ubuntu:
New
Status in linux package in Ubuntu:
Incomplete
Bug description:
After building a new vagrant instance using the ubuntu/artful64 box
(v20171023.1.0), the size of the filesystem seems to be much too
small. Here's the output of `df -h` on the newly built instance:
vagrant@ubuntu-artful:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            991M     0  991M   0% /dev
tmpfs           200M  3.2M  197M   2% /run
/dev/sda1       2.2G  2.1G   85M  97% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
vagrant         210G  182G   28G  87% /vagrant
tmpfs           200M     0  200M   0% /run/user/1000
For comparison, here is the same from the latest zesty64 box:
ubuntu@ubuntu-zesty:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            992M     0  992M   0% /dev
tmpfs           200M  3.2M  197M   2% /run
/dev/sda1       9.7G  2.5G  7.3G  26% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
vagrant         210G  183G   28G  88% /vagrant
tmpfs           200M     0  200M   0% /run/user/1000
With artful64, the size of /dev/sda1 is reported as 2.2G, which results in 97% disk usage immediately after building, even though the disk size is 10G, as reported by fdisk:
vagrant@ubuntu-artful:~$ sudo fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4ad77c39

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 20971486 20969439  10G 83 Linux

Disk /dev/sdb: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Almost any additional installation results in a "No space left on device" error.
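The figures quoted above are internally consistent with the
growpart-succeeded / resize2fs-failed reading: fdisk's sector count for
/dev/sda1 times the 512-byte sector size equals the 10736352768-byte target
that growpart logged, and growpart's old size of 2359296000 bytes is the
2.2G filesystem df still sees. A quick sanity check of that arithmetic:

```shell
# Values copied from the fdisk table and the cloud-init growpart log above.
sectors=20969439                 # /dev/sda1 "Sectors" column from fdisk
old=2359296000                   # growpart "from" size in bytes
echo "partition bytes: $((sectors * 512))"   # matches growpart's "to" value
awk -v b="$old" 'BEGIN { printf "old fs size: %.1f GiB\n", b / 1024^3 }'
awk -v b="$((sectors * 512))" 'BEGIN { printf "partition:   %.1f GiB\n", b / 1024^3 }'
```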
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1726818/+subscriptions