Is this possible?
Bob
ubuntu-qygzanxc at listemail.net
Thu Oct 6 09:09:51 UTC 2016
** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
20:27:08 -0400
> On Wed, Oct 5, 2016 at 4:24 PM, Bob <ubuntu-qygzanxc at listemail.net> wrote:
> > ** Reply to message from Peter Silva <peter at bsqt.homeip.net> on Wed, 5 Oct 2016
> > 07:46:09 -0400
> >
> >> "swap is maxed out"
> >>
> >> uh... if that's true, it doesn't matter how much or what kind of
> >> CPU you have; the machine will crawl and die from time to time
> >> because it is sitting in I/O wait.
> >
> > true
> >
> >
> >> When "swap is maxed out" the kernel will kill processes randomly
> >> (the OOM killer); you cannot expect a PC with its memory
> >> (including swap space) entirely full to run correctly.
> >
> > If this is what Linux does, that is a very bad design. I would never
> > have thought the system would do that.
> >
>
> OK, I said "randomly", but I meant that loosely, in the sense that the
> user is unlikely to understand what is being killed or why. An
> explanation of the algorithm is here: https://linux-mm.org/OOM_Killer
Thanks for the link.
> It isn't bad design, it's completely normal and a logical consequence
> of an aggressively modern virtual memory system. Detailed
> explanation here:
>
> http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html
I come from many, many years of mainframe experience and am fairly new to
Linux, but I still think it is a bad design.
> Example: you start up two processes from the same executable. They
> start out identical, so they share memory pages. Then one process
> needs to write to a page, so that page cannot be shared anymore, and
> the OS has to allocate a new one. Nobody malloc'd anything, and
> startup was much faster because nothing was copied just because two
> processes were using it. Copy-on-write...
>
> Example: when you do a malloc and the value isn't initialized, it may
> just succeed (as long as the allocation fits within process and/or
> virtual memory limits). When the process actually writes to it,
> ahh... then the memory needs to really exist, but if you don't
> actually have the memory (and/or swap) available... (b)OOM.
>
> People are better off not overloading their systems, and never
> encountering OOM, but Linux is actually as smart as possible given a
> really poor situation.
I agree that overloading a system is bad, but it seems many people here
advocate setting swap to zero or to the size of memory. I am used to a
large swap size to allow for peak memory usage, and that is how I set up
my system. My current swap size is 4 times my memory size, and I consider
that a bit small. I have never tracked maximum swap usage, so I don't know
what it has been, but current swap usage is 60 MB. I have some
long-running number-cruncher programs, but I limit the number running to
the number of cores. I have not noticed any performance problems using
several CLI and/or GUI programs while everything else is running.
--
Robert Blair
The inherent vice of capitalism is the unequal sharing of the blessings. The inherent blessing of socialism is the equal sharing of misery. -- Winston Churchill