Hacker News

Origin of the rule that swap size should be 2x of the physical memory

43 points by SeenNotHeard | yesterday at 11:09 PM | 41 comments

Comments

dirk94018 | today at 12:40 AM

Early BSD VM pre-allocated swap backing for every anonymous page — you couldn't allocate virtual memory without a swap slot reserved for it, even if the page was never paged out.

When a process forked, the child needed swap reservations for the parent's entire address space (before exec replaced it). A large process forking temporarily needed double its swap allocation. If your working set is roughly equal to physical RAM, fork alone gets you to 2x.

This was the practical bottleneck people actually hit. Your system had enough RAM, swap wasn't full, but fork() failed because there wasn't enough swap left to reserve. 2x was the number that made fork() stop failing on a reasonably loaded system.

The later overcommit/copy-on-write changes made this less relevant, but the rule of thumb outlived the technical reason. Most people repeating "2x RAM" today are running systems where anonymous pages aren't swap-backed until actually paged out.
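
On a modern Linux box you can inspect this overcommit accounting directly; a quick sketch using standard procfs paths (nothing here is distro-specific):

```shell
# Overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Committed_AS is the total address space handed out; CommitLimit only
# actually constrains allocations when overcommit_memory is 2 (strict),
# which is the mode closest to the old BSD reserve-everything behavior
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```

In the default heuristic mode (0), Committed_AS routinely exceeds physical RAM plus swap, which is exactly the situation the original BSD design made impossible.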

Today swap is no longer about extending your address space, it's about giving the kernel room to page out cold anonymous pages so that RAM can be used for disk cache.

A little swap makes the system faster even when you're nowhere near running out of memory, because the kernel can evict pages it hasn't touched in hours and use that RAM for hot file data instead.
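
You can see how much of this is happening on your own machine; a small sketch (swapon is from util-linux, present on essentially every distro):

```shell
# Show active swap devices and how much of each is in use
swapon --show

# swappiness biases reclaim between anonymous pages and file cache;
# higher values let the kernel evict cold anonymous pages sooner
# in favor of keeping hot file data cached
cat /proc/sys/vm/swappiness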

The exception is hibernation: you need swap >= RAM for that, which is why Ubuntu's recommendations are higher than Red Hat's 20% of RAM.

petcat | today at 12:18 AM

The OP clearly states that he wants to know the earliest origin of the rule, and the only answers he gets are people giving their own opinions on how much swap space you should have.

Too bad because it's an interesting question that I would also like to know the answer to.

Bender | today at 12:54 AM

Managed over 50k servers with zero swap. Set the overcommit ratio to 0, configured min_free based on a Red Hat formula, and had application teams keep some memory free. Adjusted oom scores at application startup, especially for database servers where panic is set to 0.

Servers ranged from 144GB to 3TB of RAM, and that memory is heavily utilized. On servers meant to be stateless app and web servers, panic was set to 2 to reboot on OOM, which mostly occurred on the performance team's machines (constantly load testing hardware and apps) and a few dev machines where developers were not sharing nicely. Engineered correctly, OOM will be very rare, and this only gets better with time as applications gain more control over memory allocation and tools like namespaces/cgroups mature. Java will always leak; just leave more room for it.
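
A sketch of a no-swap configuration in this spirit; the knobs are standard Linux sysctls, but the specific values below are illustrative assumptions, not the poster's actual settings:

```shell
# Strict overcommit accounting: allocations fail rather than triggering
# the OOM killer later (2 = strict, limit = swap + ratio% of RAM)
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=100   # illustrative; with no swap, 100% of RAM

# Keep a floor of free memory for atomic/kernel allocations (value in kB;
# the formula used to size this is an assumption here)
sysctl -w vm.min_free_kbytes=1048576

# 0 = never panic on OOM (database servers); 2 = always panic,
# which together with kernel.panic reboots stateless boxes on OOM
sysctl -w vm.panic_on_oom=0

# Shield a critical process from the OOM killer at startup
# ("postgres" is a hypothetical example process)
echo -1000 > /proc/$(pidof -s postgres)/oom_score_adj
```

All of these require root, and vm.overcommit_memory=2 in particular needs application-level testing first, since software written to assume overcommit will see allocation failures it never handled before.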

antongribok | today at 2:06 AM

I think more people should know about the existence of ZRAM on modern Linux distributions. It's really changed the way I look at swap configs.

ZRAM is a compressed block device that is stored in RAM. It's great!

Previously, if I ever hit high memory pressure, I really dreaded the slowdowns. Now, with swap sitting on top of /dev/zram0, it's a completely different experience.

I have ZRAM enabled on all of my personal machines, both laptops with limited memory, and desktops with 64 or 128GB of RAM. It's rarely used, but it is nice to have that extra room sometimes.

A zram device is so much faster than even the latest NVMe drives.
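
For anyone wanting to try it, a minimal manual setup looks roughly like this (modern distros ship zram-generator or similar to automate it; the device name, size, and algorithm below are illustrative):

```shell
# Load the zram module and configure a compressed RAM block device
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 8G

# Format it as swap and enable it at higher priority than any disk
# swap, so the kernel fills the fast zram device first
mkswap /dev/zram0
swapon --priority 100 /dev/zram0

# Verify it is active
swapon --show
```

All of this requires root and resets on reboot, which is why the distro-provided generators are the usual way to make it permanent.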

Sohcahtoa82 | today at 12:45 AM

None of the answers are satisfying to me, tbh.

I install more RAM so I can swap less. If I have 8 GB, then the 2x rule means I should have a 16 GB swap file, giving me 24 GB of total memory to work with. If I then stumble upon a good deal on RAM and upgrade to 32 GB, then if I never had memory problems with 24 GB, then I should be able to completely disable paging and not have a problem. But instead, the advice would be to increase my paging file to 64 GB!?

It doesn't make any sense. At all.

bandrami | today at 12:45 AM

I've had arguments with people about this for 20 years now, and the most compelling case I've heard involved the price of storage versus the price of RAM in the mid-to-late 1990s: 2x represented an optimal use of money when designing a system at that point in time.

LowLevelKernel | today at 12:16 AM

Curious: how much swap have you allocated on your personal setup?

lyu07282 | today at 1:40 AM

In 1997, people were already arguing about it on the Slackware Usenet group:

    >Question: Why do you need 500MB of swap space? You would be better of
    >spending your money on more RAM than wasting it on so much swap space,
    >considering that it would most likely never be used anyways.

    I work with systems that have between 256MB and 1GB of RAM and
    between 4GB and 16GB available for Linux. My experience with other
    operating systems is that swap should be 2X to 3X RAM

    ...

    The info that I have read about Linux is that the 2x for swap space is
    only for those running less than 16mb of ram. Your swap space could be
    equal to your ram

    ...

    I know there are broken OSes out there where it's recomended to
    have 2x RAM swapspace, but Linux is not broken in that way.
    With Linux you should have <Max needed memory> - <RAM> swapspace,
    and depending on your needs that might range from 0 to infinity
    MBs of swap.

    ...

    THIS IS CRAZY!!!! YOU DON'T KNOW WHAT THE F--K YOU'RE TALKING ABOUT.

It goes downhill from there...

https://groups.google.com/g/alt.os.linux.slackware/c/hWy0h_S...

xen2xen1 | today at 12:31 AM

It's old enough that I'd put money on DEC. Any takers on that?