Modular Application Updates: Libvirt and QEMU
Corey Bryant
corey.bryant at canonical.com
Mon Apr 3 17:00:17 UTC 2017
On Tue, Mar 28, 2017 at 4:26 AM, Christian Ehrhardt <
christian.ehrhardt at canonical.com> wrote:
> On Mon, Mar 27, 2017 at 9:13 PM, Dmitrii Shcherbakov <
> dmitrii.shcherbakov at canonical.com> wrote:
> >
> >
> > TL;DR: Putting libvirt and QEMU into the same snap removes the ability
> > to update them independently and to defer using new QEMU binaries
> > until VM shutdown.
> >
>
> [...]
>
> 7 If a QEMU process is terminated via SIGTERM or SIGKILL, the guest
> > kernel page cache and buffer cache will not be dropped, which will
> > very likely cause file system corruption.
> >
>
> There are hooks you can link in yourself for upgrades IIRC.
> That could be used to at least gracefully shut them down - but I agree that
> there should be no reason to do so at all.
> The QEMU processes should continue to run through and after the update.
>
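To make that concrete, here is a toy sketch (made-up paths, with sleep(1) standing in for a qemu binary; nothing snap-specific): after a classic deb/rpm-style upgrade unlinks a binary, an already-running process keeps executing the old copy, and Linux marks this as "(deleted)" on /proc/&lt;pid&gt;/exe.

```shell
set -e
dir=$(mktemp -d)
cp "$(command -v sleep)" "$dir/qemu-demo"    # the "old" binary
"$dir/qemu-demo" 60 &                        # the long-running "VM"
pid=$!
rm "$dir/qemu-demo"                          # the "upgrade" unlinks it

# The process is still alive and still running the unlinked inode:
exe=$(readlink "/proc/$pid/exe")
echo "pid $pid still runs: $exe"             # path ends in "(deleted)"

kill "$pid"
wait "$pid" 2>/dev/null || true
rmdir "$dir"
```

The same check against a real system would list every process still holding a replaced binary after an upgrade.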
> [...]
>
>
> > The idea with any 'classic' package management system (for debs, rpms
> > etc.) is as follows:
> >
> > 1 Updates move new files over the old ones. That is, shared objects
> > and binaries are unlinked, not overwritten - if a process still has a
> > file open (or mmapped, which requires the file to be open), the old
> > inode and its data are kept on the file system until the reference
> > count drops to zero;
> >
> > 2 Running programs can keep using the old binaries and shared objects
> > they already have open, until they restart
> >
>
> [...]
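Points 1 and 2 above are easy to demonstrate with a scratch file rather than a real package: an unlinked file stays readable through an already-open descriptor until the last reference is closed, while fresh opens see the replacement.

```shell
set -e
tmp=$(mktemp)
echo "old-version" > "$tmp"

exec 3< "$tmp"               # a "running process" holds the file open
rm "$tmp"                    # the "upgrade" unlinks the old inode...
echo "new-version" > "$tmp"  # ...and installs a new file at that path

read -r held <&3             # the open descriptor still sees old data
fresh=$(cat "$tmp")          # a fresh open sees the new data
echo "held fd reads:   $held"
echo "fresh open reads: $fresh"

exec 3<&-                    # last reference dropped; old inode freed
rm "$tmp"
```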
>
>
> > 1 A new squashfs and an old squashfs are obviously different file
> > systems - hence inodes refer to different file systems;
> >
> > 2 All processes are killed during an update unconditionally and the
> > new file system is used to run new processes;
> >
>
> Yeah, for server-side things with a longer lifecycle that doesn't seem
> right.
>
>
> > 3 Some libraries are taken from the core snap's file system, which
> > remains the same (though it may differ if the core snap was updated
> > earlier while a particular snap was still using an old version of it).
> >
>
> In some sense the squashfs entry points can be considered your new inode.
> All new application starts should be from the new version, but any old
> program could continue to run from the old content.
> That would be true for the core snap and application snaps alike - only
> once all old refs are gone can the old version "really" go away.
>
> So think of an upgrade:
> PRE: content in /snap/app/oldver/foo
> UPGRADE adds: /snap/app/newver/foo
> UPGRADE changes: /snap/app/current is set to newver
> But /snap/app/oldver/foo would stay around and running applications would
> be kept alive.
> Only once the last one is gone would /snap/app/oldver completely vanish.
>
>
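A rough sketch of that flow, with a temp dir standing in for /snap (no real snapd involved): flipping the "current" symlink points new starts at the new revision, while anything that already resolved oldver keeps using it until the old revision is garbage-collected.

```shell
set -e
root=$(mktemp -d)
mkdir -p "$root/app/oldver" "$root/app/newver"
echo "v1" > "$root/app/oldver/foo"
echo "v2" > "$root/app/newver/foo"

# PRE: current -> oldver
ln -s oldver "$root/app/current"

# A "running" app has already resolved its files through oldver:
running=$(readlink -f "$root/app/current")/foo

# UPGRADE: repoint current at newver; oldver stays on disk
rm "$root/app/current"
ln -s newver "$root/app/current"

old_view=$(cat "$running")               # the running app still sees v1
new_view=$(cat "$root/app/current/foo")  # new starts see v2
echo "running app sees: $old_view"
echo "new starts see:   $new_view"

# Only once the last user of oldver is gone would it really vanish:
rm -rf "$root"
```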
I like this suggestion. It sounds similar to this, which seems to be how
traditional libvirt/qemu packages work:
http://unix.stackexchange.com/questions/74142/why-does-a-software-package-run-just-fine-even-when-it-is-being-upgraded
Dmitrii, it sounds like without support along those lines, any update to
the qemu/libvirt snap would require all VMs to be shut down during the
update. Is that right? If so, that's going to be a tough situation.
Corey
> IIRC we keep it around anyway to be able to roll back, right [1]?
> We already make sure nothing new is started from the old version.
> Maybe all that is needed is a more advanced garbage collection and a
> change to the update behavior that leaves running things alive.
>
>
>
> Also on the "killing on update" as a major change in behavior - and to
> take it a bit out of the qemu/libvirt example:
> If I were watching a movie in my snapped vlc on one screen and refreshed
> all snaps on my system from another console, I would in no way expect
> that to kill my running video.
> I haven't tried, but according to the report here that is what would
> happen, right?
> I'd expect it to continue to run, and whenever I start vlc next time it
> will be the new, upgraded one.
> BTW - I realized that "snap remove vlc" leaves a running instance alive,
> so refresh being harder on a running app than remove seems even more
> wrong.
>
>
> Thanks for sharing your thoughts Dmitrii, I think you uncovered an
> important and interesting part of the snap implications which is worth
> thinking through in detail.
> I hope the snap experts can jump in with a deeper view on all this -
> especially on all the snap vs snap_data considerations that have to go
> into this once running from old+new at the same time becomes allowed.
>
> [1]: https://developer.ubuntu.com/en/snappy/guides/garbage/
> --
> Snapcraft mailing list
> Snapcraft at lists.snapcraft.io