[Bug 1654777] Re: zram-config control by maximum amount of RAM usage

John Moser john.r.moser at gmail.com
Fri Aug 4 04:29:34 UTC 2017


That still talks about on-disk swap.  This doesn't create a swap file or
swap partition; it creates a swap area in RAM.  In general, there is no
reason to have any sort of swap area on disk, save for scientific
applications where you have 100 times as much working set as you have
physical RAM.

In the context of the Debian installer, I don't think you should rely
on on-disk swap in any case.  Creating a 1-2GB swap file when you have,
e.g., 64MB of RAM is patently ridiculous:  if you need that much more of
a working set, you're never going to finish installation; you're just
going to swap thrash for 2 or 3 years while the system tries to figure
out how to operate the installer but is too busy operating kswapd.

The argument that having a swap file available makes RAM scheduling
more efficient is also one of rapidly diminishing returns as RAM size
grows:  the entire installed system is like 4GB, the installer doesn't
eat much memory, and most block cache won't get reused, so the system
will likely evict stale pages at maximum efficiency even with 1GB of RAM.

zswap still requires a backing device (swap file or partition).  zram
doesn't.  That should be brought up in the next discussion on the topic
methinks.
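
For anyone who hasn't looked at the difference, a rough sketch (these
are the standard sysfs knobs; the swap partition below is only a
placeholder):

# zram: a compressed block device in RAM, usable as swap on its own
modprobe zram
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon -p 100 /dev/zram0

# zswap: a compressed cache in front of disk swap; it needs a real
# swap partition or file to already be active
echo 1 > /sys/module/zswap/parameters/enabled
swapon /dev/sdXN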

In any case, there's already a zram-config package, and this script is a
replacement for the one in the current release.  Whether the installer
switches to this, or we install it by default, or whatever, is a
secondary question, but one I wanted to raise.  The longer discussions on
that are probably off-topic in this particular bug.

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to zram-config in Ubuntu.
https://bugs.launchpad.net/bugs/1654777

Title:
  zram-config control by maximum amount of RAM usage

Status in zram-config package in Ubuntu:
  New

Bug description:
  This is a request for comment regarding adjusting zram-config to limit
  memory consumption, rather than to limit the amount of memory to be
  swapped.

  Under the current script (in 16.10), about 1/2 of RAM can be swapped
  to zram.  This may consume 1/6 of RAM space or 1/4 of RAM space (3:1
  and 2:1 compression, respectively), for example, depending on actual
  compression performance.

  This modification instead caps the portion of RAM each zram device may
  consume.  Instead of specifying 1/2 of RAM as the swap area, it
  specifies 100%.  On a machine with 8GB of RAM, for example, it will
  expose 8GB of swap, and it will limit zram to consuming 4GB of real
  RAM to store the compressed data.

  Because zram generally gets 3:1 to 4:1 compression, it would be more
  realistic to create 1.5-2 times as much swap space.  For example, the
  8GB system would have 12GB of swap space, but only use up to 4GB of
  memory for it.  At 3:1 compression, that would fill all of the zram
  swap space.

  Essentially, the actual size (disksize) of the device limits how much
  uncompressed data you can swap out, while the mem_limit limits the
  amount of RAM the zram device can use, including the compressed data
  and the control data.
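
  To make that concrete with the standard zram sysfs attributes (a rough
  sketch of the idea, not the script itself, using the 8GB example
  above):

  echo 8G > /sys/block/zram0/disksize   # swap out up to 8GB of uncompressed pages
  echo 4G > /sys/block/zram0/mem_limit  # but never hold more than 4GB of real RAM
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0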

  
  Further Discussion:

  Note that, in my experience, zram is fast.  I've run a 1GB server with
  this script and gone 350MB into zram swap, with 40MB of available
  memory, just by starting up a GitLab Docker container and logging in:

  Tasks: 184 total,   1 running, 183 sleeping,   0 stopped,   0 zombie
  %Cpu(s):  6.0 us,  2.6 sy,  0.0 ni, 90.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.3 st
  KiB Mem :  1016220 total,    76588 free,   826252 used,   113380 buff/cache
  KiB Swap:  1016216 total,   666920 free,   349296 used.    45324 avail Mem

  This got up as far as 700MB of swap used in just a few clicks through
  the application.  It still returned commit diffs and behaved as if it
  were running on ample RAM--I did this during a migration from a system
  with 32GB of RAM, over 10GB of which was disk cache.

  As such, I see no problem essentially doubling the amount of reachable
  RAM--and I do exactly that on 1GB and 2GB servers running large
  working sets, with active working sets larger than physical RAM space,
  and with vm.swappiness set to 100.
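
  For reference, that last knob is just the stock VM tunable, nothing
  specific to this script:

  sysctl -w vm.swappiness=100    # or the same setting in /etc/sysctl.d/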

  Note that provisioning swap at 5:1, even if you are getting 5:1
  compression ratios, would involve a lot of swapping and thus a lot of
  LZO computation.  For this reason, more than double might be unwise in
  the generic case.  Doubling effective RAM means using 50% of RAM to
  store 1.5x RAM worth of swap--so, on a 1GB machine, 1.5GB of zram
  devices with a 0.5GB mem_limit and at least a 3:1 average compression
  ratio.

  The script I have provided allocates at most 50% of RAM to store at
  most 100% of RAM as swap.  That is likely to multiply the usable
  working space by about 1.6.
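
  In shell terms, the sizing policy boils down to roughly this (a sketch
  only: the variable name is mine, and the shipped script may also split
  the total across several devices):

  mem_total_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
  echo "$(( mem_total_kb ))K"     > /sys/block/zram0/disksize   # 100% of RAM as swap
  echo "$(( mem_total_kb / 2 ))K" > /sys/block/zram0/mem_limit  # spend at most 50% of RAM
  mkswap /dev/zram0 && swapon -p 100 /dev/zram0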

  I have provided the entire script, rather than a diff, as a diff is
  about the same size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zram-config/+bug/1654777/+subscriptions


