[Bug 1811580] Re: systemd fails to start sshd at reboot
Steve Langasek
steve.langasek at canonical.com
Tue Feb 26 17:05:22 UTC 2019
On Tue, Feb 26, 2019 at 02:39:55PM -0000, Matt P wrote:
> Anyway, root cause seems to be this systemd-tmpfiles error. Tmpfile gets
> purged at reboot and doesn't get recreated.
> Seems pretty major that applying security updates would lock you out of
> your server. If I didn't happen to have a serial console with this
> particular VPS provider (some others I use don't provide one)...I would
> have no idea what was going on.
> I get this might be due to weird openvz image or older kernel...but
> these ubuntu openvz images are very common.
As per
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1811580/comments/14
you must have at least 042stab134.7 installed. Your comment shows that you
have 042stab120.18 installed. You will need to contact your hosting provider
about updating.
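To check which kernel a container is actually running, 'uname -r' from
inside the guest shows the 042stab revision; the output below is only
illustrative, the exact string will vary:
    $ uname -r
    2.6.32-042stab134.7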
Given that an updated kernel exists, we do not intend to reduce security for
all other Ubuntu users on account of hosting providers who are both running
Ubuntu container guests on top of an unsupported non-Ubuntu kernel, *and*
are not keeping their kernel up to date.
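For reference, /run/sshd is normally recreated at each boot by
systemd-tmpfiles from a tmpfiles.d(5) entry of roughly this shape (a
sketch of the format, not necessarily the exact file shipped):
    # type  path            mode  owner  group  age
    d       /var/run/sshd   0755  root   root   -
When systemd-tmpfiles itself fails on the kernel, that entry is never
applied, which matches the symptom quoted above.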
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1811580
Title:
systemd fails to start sshd at reboot
Status in systemd package in Ubuntu:
Incomplete
Bug description:
So far, reported issues have turned out to be:
- obsolete/buggy/vulnerable third-party-provided kernels
- bad permissions on /
Please ensure / is owned by root:root.
Please ensure you are running an up-to-date kernel.
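A quick way to check the ownership of / from a shell (illustrative
command; the expected output is shown as a comment):
    $ stat -c '%U:%G %a' /   # should print: root:root 755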
===
Ubuntu 16.04.5, systemd 229-4ubuntu21.15
The latest systemd update has somehow changed the method it uses to
start 'ssh.service' i.e. 'sshd'. systemd fails to start sshd if
/etc/ssh/sshd_config contains "UsePrivilegeSeparation yes" and
/var/run/sshd/ does not already exist. Since this is the default,
virtually EVERY Ubuntu 16.04 server in the world has
UsePrivilegeSeparation set to yes. Furthermore, at the time when the
user performs 'apt upgrade' and receives the newest version of
systemd, /var/run/sshd/ already exists, so sshd successfully reloads
for as long as the server doesn't get rebooted. BUT, as soon as the
server is rebooted for any reason, /var/run/sshd/ gets cleaned away,
and sshd fails to start, causing the remote user to be completely
locked out of his system. This is a MAJOR issue for millions of VPS
servers worldwide, as they are all about to get locked out of their
servers and potentially lose data. The next reboot is a ticking time
bomb. The bomb can be defused by explicitly setting
'UsePrivilegeSeparation no' in /etc/ssh/sshd_config; however,
unsuspecting administrators are bound to be caught out in their
millions. I got caught by it in the middle of setting up a new server
yesterday, and it took a whole day to find the source.
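If a server has already been bitten, the lockout can also be undone
from a console without touching sshd_config, by recreating the
directory by hand (a sketch, using the path from this report; /run is
a tmpfs, so this only lasts until the next reboot):
    # mkdir -p /var/run/sshd
    # chmod 0755 /var/run/sshd
    # systemctl start ssh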
The appropriate fix would be to ensure that systemd can successfully
'start ssh.service' even when 'UsePrivilegeSeparation yes' is set.
systemd needs to ensure that /var/run/sshd/ exists (creating it if
necessary) before starting sshd, just as the init.d script for sshd
does. openssh could also be patched so that UsePrivilegeSeparation is
no longer enabled by default, but that would not solve the problem for
millions of pre-existing config files; only an openssh update that
force-overrides the flag to 'no' would. Thus systemd still needs to be
responsible for initialising sshd properly, by ensuring that
/var/run/sshd/ exists before it sends the 'start' command.
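For illustration only, the behaviour asked for here can be expressed
in the unit itself; a minimal sketch of a local drop-in (hypothetical
file name), where RuntimeDirectory= makes systemd create /run/sshd
with the given mode before each start:
    # /etc/systemd/system/ssh.service.d/privsep-dir.conf (hypothetical)
    [Service]
    RuntimeDirectory=sshd
    RuntimeDirectoryMode=0755
After a 'systemctl daemon-reload', this mirrors what the init.d script
does when it creates /var/run/sshd before launching the daemon.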
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1811580/+subscriptions