Memory constraints and lxc containers

Edward Hope-Morley edward.hope-morley at canonical.com
Fri Feb 27 11:00:27 UTC 2015


Maybe this can help (thanks to Serge):

The simplest way is to use

        cgm getvalue memory ''  memory.usage_in_bytes

as that only requires cgmanager on the host and in the container.
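For example, from inside the container you can compare usage against the
limit (a rough, untested sketch; '' means the caller's own cgroup, and I
believe memory.limit_in_bytes is read the same way, though I haven't
checked it on every release):

        # current usage of the caller's cgroup
        usage=$(cgm getvalue memory '' memory.usage_in_bytes)
        # the limit applied to that same cgroup
        limit=$(cgm getvalue memory '' memory.limit_in_bytes)
        echo "using $usage of $limit bytes"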

You can also use lxcfs.  It works on trusty, but requires the
ppa:ubuntu-lxc/daily PPA.  You'd do

sudo add-apt-repository ppa:ubuntu-lxc/daily
sudo apt-get update
sudo apt-get install lxcfs

Now lxcfs will run and mount a filesystem under /var/lib/lxcfs.  The lxc
from that same ppa will (I assume) automatically hook newly created
containers to use lxcfs.  For any pre-existing containers, you can just
add

lxc.include = /usr/share/lxc/config/common.conf.d

to /var/lib/lxc/<container>/config

That should make it so when you start the container, /proc/meminfo inside
the container is virtualized.
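To check that this took effect (a quick sketch; the paths are the lxcfs
defaults, adjust if yours differ):

        # on the host: the fuse mount should expose proc/ and cgroup/ trees
        ls /var/lib/lxcfs

        # inside the container: MemTotal should now reflect the cgroup
        # limit rather than the host's total RAM
        grep MemTotal /proc/meminfo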

On 27/02/15 09:27, Stuart Bishop wrote:
> Hi.
>
> I've seen several bug reports and workarounds for charms that need to
> tune memory settings, which tends to fail horribly when using the
> local provider or deploying to lxc containers. It seems to be
> impossible to infer how much RAM a service should be using. The end
> result is extra configuration items that override inferred values, and
> non-lxc-specific code paths.
>
> I'm not sure what the solution should be. Are there suitable container
> constraints that could be passed to the charm, so that charms can make
> their decisions based on those constraints rather than using the global
> system values and hoping there is only a single container on the system?
> Should the lxc containers be set up with limited resources and report
> that, instead of the system values?
>
> Memory is the one I've seen tripped over several times. I've also seen
> lxc-specific code paths for disabling swap and messing with sysctl
> settings, but these are less common and don't cause your system to
> grind to a halt when you are testing a charm locally and your desktop
> gets swapped out.
>
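
On the question of what a charm should base its decision on: until
constraints are exposed to the charm directly, one rough, untested
approach is to prefer the cgroup memory limit and only fall back to
/proc/meminfo (this assumes cgroup v1 mounted under /sys/fs/cgroup;
an "unlimited" cgroup reports a huge number, which the comparison
filters out):

        #!/bin/sh
        # memory limit of the container's cgroup, if readable
        limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null)
        # host (or virtualized) MemTotal, converted from kB to bytes
        total=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) * 1024 ))
        if [ -n "$limit" ] && [ "$limit" -lt "$total" ]; then
            mem=$limit
        else
            mem=$total
        fi
        echo "memory budget: $mem bytes"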