Is this possible?

rikona rikona at sonic.net
Wed Oct 5 20:07:40 UTC 2016


Hello Peter,

Wednesday, October 5, 2016, 4:46:09 AM, Peter wrote:

> "swap is maxed out"

> uh... if that's true, it doesn't matter how much or what kind of cpu
> you have, it will crawl and die from time to time. your machine is
> sitting in wait i/o. When "swap is maxed out" the kernel will kill
> processes randomly, (OOM Killer) you cannot expect a PC with its
> memory (including swap space) entirely full to run correctly. One
> would routinely expect crashes as the processes that get killed
> might be important.

Thanks for the explanation of what's going on. It was helpful.
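
A quick way to confirm the OOM killer really is behind the crashes (a sketch, assuming a reasonably recent Ubuntu) is to search the kernel log after a crash:

```shell
# Look for OOM-killer activity in the kernel log. Either command
# should show lines like "Out of memory: Killed process 1234 (...)":
dmesg -T | grep -i 'out of memory\|oom'
journalctl -k | grep -i 'out of memory\|oom'
```

If those come back empty after a crash, the instability is probably something other than memory exhaustion.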

> Here is a healthy amount of swap use:

> KiB Mem : 11733384 total,  2327220 free,  3294000 used,  6112164 buff/cache
> KiB Swap: 24761340 total, 24761340 free,        0 used.  7137152 avail Mem

Agreed.

> You should be aiming for all your work to fit in memory. Swap is
> only efficient when it is used to swap out bits of memory that are
> not being used by running processes (say initialization code, or
> code that isn't used often.) If your machine is going to swap often,
> well the swapping in from disk is 100x to 1000x slower than memory,
> so it's going to be slow.

I realize that. I have a couple of thoughts regarding that problem.
Most folks are suggesting that with, say, 32G of memory, you don't
need swap at all. I upgraded my old box to 16G [the max] for the
recent IO-bound test, and left swap at 8G. It looks like one of the
processes wants to fill up all available memory, but I don't know how
to check that. The result was that it was still using swap, perhaps
for the few other programs I was running. Is there a way to limit the
amount of memory that a process uses?
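
There is. A rough sketch of two common approaches ("your-data-job" below is a placeholder for the actual program):

```shell
# Cap one command's virtual address space at ~2 GiB; the ulimit only
# applies inside the subshell, not to the rest of the session:
( ulimit -v 2097152; your-data-job )

# On systemd-based Ubuntu, a cgroup cap counts actual RAM rather than
# address space, which is usually closer to what you want:
systemd-run --user --scope -p MemoryMax=2G your-data-job
```

With the cgroup cap, a process that exceeds the limit gets killed on its own rather than dragging the whole machine into swap.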

If, for some reason, I am running into swap even with more memory,
would there be any disadvantage to making the swap space even larger?
I know it would be slow, but might that help with the crashes?
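
If it comes to that, a swap file is easy to grow later without repartitioning. A sketch (the path is an example; all of this needs root):

```shell
# Create and enable an extra 8 GiB swap file.
fallocate -l 8G /swapfile2     # reserve the space
chmod 600 /swapfile2           # swap must not be world-readable
mkswap /swapfile2              # write the swap signature
swapon /swapfile2              # enable it immediately
swapon --show                  # confirm it is active
```

An `/etc/fstab` entry would make it permanent across reboots.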

Another thought is to use a small SSD for swap. Small ones are not
that expensive, and even if they have a shorter life, might that help
speed up swapping?
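
A related knob, assuming a stock Ubuntu kernel: vm.swappiness controls how eagerly the kernel swaps (default 60); lowering it keeps more of the working set in RAM so swap is touched only under real pressure:

```shell
# Check the current value, then lower it until the next reboot:
cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness=10
```

Putting `vm.swappiness=10` in `/etc/sysctl.conf` would make it persistent.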

The old box has dual cores. I also noticed that one program became
unresponsive when one of the cores maxed out, even though swap was not
full. htop even showed 100+%, which I didn't know was possible,
assuming the number is correct. I killed that process and that fixed
the slowdown. Swap was being used at the time - is it possible that
the simultaneous 100+% CPU and swapping was causing the crashes?
Seeing this was one of the reasons I thought more cores might help,
although I'm not sure the process would use more than one core if it
was available.
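
(In top/htop, 100% means one full core, so a multi-threaded process on a dual-core box can legitimately show up to 200%.) One way to keep such a job from starving the desktop, sketched with a placeholder command name:

```shell
# Run the long job at the lowest scheduling priority so interactive
# programs still get CPU time ("your-data-job" is a placeholder):
nice -n 19 your-data-job &

# Or pin it to one core with taskset, leaving the other core free:
taskset -c 0 your-data-job &
```

`renice` can do the same to a process that is already running.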

> You either need more memory, or you need to run less stuff at once.
> There are ways of doing that in an organized way (batch queueing
> systems), but it might be more straight forward just to put some
> sequencing in your work. That is you don't just fire up many
> background tasks at once, but rather a few at a time, planning them
> out so that all running tasks always fit in memory.

Agreed. I'd like to do a LOT of stuff at the same time. :-) I am
considering going to an Intel processor and DDR4 to [eventually] get
more memory. Would you recommend going with the Z170 chipset, or is
that too new to be supported by Ubuntu/Linux [UEFI, USB 3.1, eSATA,
etc.]?
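
The sequencing Peter describes can be approximated with xargs, assuming a file with one runnable command per line (joblist.txt is a placeholder):

```shell
# Run the commands in joblist.txt at most two at a time, instead of
# backgrounding everything at once; xargs starts the next command
# only when one of the two running slots frees up.
xargs -P 2 -I{} sh -c '{}' < joblist.txt
```

Picking -P so that the concurrent jobs always fit in RAM is the whole trick.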

> use top to look at your memory usage and keep swap down...  It doesn't
> have to be zero, but it will never work if it is 'maxed out'.

Understood. Thanks VERY much for the explanations and help!!
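
For a non-interactive check that can be logged or scripted, something like:

```shell
# One-shot view of memory and swap, in MiB:
free -m

# The top memory consumers, without the interactive display:
top -b -n 1 -o %MEM | head -15
```

Watching the swap "used" column in `free -m` over time shows whether the workload actually fits.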

-- 

 rikona

> On Wed, Oct 5, 2016 at 4:52 AM, Liam Proven <lproven at gmail.com> wrote:
>> On 3 October 2016 at 16:35, rikona <rikona at sonic.net> wrote:
>>>
>>> My hope is to do things in parallel. I work with a fair amount of
>>> data, and a large data run may take 12 hours. Sometimes I can split
>>> that and run it as multiple processes. While that is running, I may
>>> have multiple browsers, each with perhaps 50 or more tabs open. This
>>> load makes my current box unusably slow, swap is maxed out, and
>>> something very often crashes - I may lose several days of work. And
>>> there's email, editing of docs, making diagrams, etc, etc. Perhaps
>>> Intel can do one 12 hour job in 4 hours, but I still have lots going
>>> on during that 4 hours.
>>
>>
>> I'm ignoring all the pointless advocacy here.
>>
>> If you have stability issues, you need to troubleshoot them properly.
>>
>> You need to profile your workloads and find the bottlenecks.
>>
>> And if it's background stuff and concerns with OSes struggling to
>> balance conflicting workloads then you should probably be looking at
>> VM solutions, and partitioning off the background number-crunching
>> tasks.
>>
>> Throwing CPU cores at the problem is inane and a pointless waste of
>> cash. Throwing slower cores is burning banknotes. And throwing slower
>> cores *when you're not even sure it's CPU-bound* is just stupid.
>>
>> Sorry, but it is.
>> --
>> Liam Proven • Profile: http://lproven.livejournal.com/profile
>> Email: lproven at cix.co.uk • GMail/Twitter/Facebook/Flickr: lproven
>> Skype/MSN: lproven at hotmail.com • LinkedIn/AIM/Yahoo: liamproven
>> Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)
>>
>> --
>> ubuntu-users mailing list
>> ubuntu-users at lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
