Is there a way to reduce the memory footprint of Ubuntu 9.04?
Basically, my question is: is the default install a bare-bones instance? What can be trimmed, and how, if anything at all?
TIA!
10 Replies
FWIW, I've been impressed with memory usage - for whatever reason, each Apache instance seems to use ~10MB less memory on Ubuntu vs. CentOS. This is based on numbers from top, which may not be exact but give a rough estimate (shared memory for each instance is comparable on both systems).
If you're running MySQL, there are tweaks that can be done to it to save memory if needed.
@jjeffus:
Is there a way to reduce the default memory footprint of Ubuntu 9.04?
Why yes! Install Debian 5.0 :D
250 MB base install, plus a bit of apt-pinning on the side if need be…
Edit: if you mean that newfangled random-access brouhaha: carefully tune your my.cnf (see /usr/share/doc/mysql-server-5.0/examples), trim your php-fastcgi threads, replace Apache with Nginx/Lighttpd/etc., and use munin to see how it all performs over time.
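To make the my.cnf point concrete, here's a hypothetical low-memory fragment (MySQL 5.0-era option names; every value below is an assumption to adjust against your own workload, not a recommendation):

```ini
# Illustrative my.cnf fragment for a small VPS -- tune, don't copy.
[mysqld]
key_buffer       = 8M    # MyISAM index cache; the big memory knob on 5.0
query_cache_size = 8M    # or 0 to disable the query cache entirely
max_connections  = 30    # each connection carries per-thread buffers
table_cache      = 64
skip-innodb              # only if every table is MyISAM
```

The packaged examples under /usr/share/doc/mysql-server-5.0/examples (my-small.cnf, my-medium.cnf) are a sane starting point for boxes in this size range.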
@mjrich:
@jjeffus:Is there a way to reduce the default memory footprint of Ubuntu 9.04?
Why yes! Install Debian 5.0 :D 250 MB base install, plus a bit of apt-pinning on the side if need be…
Edit: if you mean that newfangled random-access brouhaha: carefully tune your my.cnf (see /usr/share/doc/mysql-server-5.0/examples), trim your php-fastcgi threads, replace Apache with Nginx/Lighttpd/etc., and use munin to see how it all performs over time.
I just tried Nginx for the first time over the last 24 hours. Of course this is far from a production server, but it is way smaller than Apache. Wow! I've been using Apache for years; it is interesting to find that there are now several serious contenders for an httpd.
> tried Nginx for the first time over the last 24 hours. Of course this is far from a production server, but it is way smaller than Apache. Wow!
Yep, I'd have to be dragged kicking and screaming back to Apache.
For example, look at top on my linode and tell me how much memory you think Apache is using:
top - 22:26:10 up 67 days, 11:55, 1 user, load average: 0.05, 0.02, 0.00
Tasks: 86 total, 1 running, 85 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 368856k total, 323480k used, 45376k free, 15620k buffers
Swap: 524280k total, 3896k used, 520384k free, 177268k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17402 www-data 16 0 146m 61m 54m S 0 17.0 0:32.83 apache2
19889 www-data 16 0 146m 60m 53m S 0 16.8 0:26.64 apache2
18368 www-data 16 0 146m 60m 52m S 0 16.7 0:31.21 apache2
21134 www-data 16 0 146m 58m 51m S 0 16.4 0:17.69 apache2
25082 www-data 16 0 146m 54m 47m S 0 15.2 0:04.42 apache2
2715 mysql 15 0 132m 52m 5140 S 0 14.5 196:11.57 mysqld
24524 www-data 19 0 144m 45m 39m S 0 12.5 0:05.30 apache2
1537 root 18 0 142m 7836 4600 S 0 2.1 0:19.02 apache2
61+60+60+58+54+45 = 338MB of apparent Apache usage. But wait, I'm on a Linode 360, and 45MB is reported as free, 15MB is used for file buffers and 177MB is used by disk cache, on top of the 52MB used by MySQL. Clearly something doesn't add up.
Assuming this script
Back to the original question though, I have no idea why Apache looks larger on CentOS. It could be that the default options to top are a little different, so it is actually measuring memory differently. It could be that you are comparing 32-bit Ubuntu to 64-bit CentOS, or it could be that more modules are loaded on CentOS, or that they use different compiler optimizations, or…
Apache is still a huge memory hog. I used to have a site which would get big bursts of downloads of large files (hundreds of megs). The disk should have been able to handle it, since people only downloaded one file at a time, and it should have fit entirely in RAM.
However, Apache would say "Oh, many clients, I should spawn a ton of processes to serve these", consume all available RAM leaving none for the disk, which would then thrash trying to handle all the read requests with no cache.
The box would then more or less hang.
I switched out the server for lighttpd, which handled all the extra requests without really using any extra RAM (it's a single-process server), and suddenly my performance was fine; there was plenty of RAM free for disk caching.
In any case, my point stands: adding up the RES size of each Apache process, as I've seen done here and elsewhere (and as I've done myself), dramatically overestimates Apache memory usage.
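To make that concrete, here's a rough Python sketch (my own illustration, not anyone's actual accounting script) that parses the top output pasted above and compares the naive sum of RES against an estimate that counts the shared pages only once:

```python
# RES includes SHR, the pages shared between processes (mostly the same
# copy-on-write libraries and module text), so summing RES counts those
# shared pages once per process. A rougher-but-better estimate is
# sum(RES - SHR) plus one copy of the largest shared portion.

def to_mb(field):
    """Convert a top(1) memory field ('61m' or plain KB like '7836') to MB."""
    if field.endswith('m'):
        return float(field[:-1])
    return float(field) / 1024

top_lines = """\
17402 www-data 16 0 146m 61m 54m S 0 17.0 0:32.83 apache2
19889 www-data 16 0 146m 60m 53m S 0 16.8 0:26.64 apache2
18368 www-data 16 0 146m 60m 52m S 0 16.7 0:31.21 apache2
21134 www-data 16 0 146m 58m 51m S 0 16.4 0:17.69 apache2
25082 www-data 16 0 146m 54m 47m S 0 15.2 0:04.42 apache2
24524 www-data 19 0 144m 45m 39m S 0 12.5 0:05.30 apache2
1537 root 18 0 142m 7836 4600 S 0 2.1 0:19.02 apache2""".splitlines()

res, shr = [], []
for line in top_lines:
    fields = line.split()
    res.append(to_mb(fields[5]))   # RES column
    shr.append(to_mb(fields[6]))   # SHR column

naive = sum(res)                                           # shared pages counted per-process
better = sum(r - s for r, s in zip(res, shr)) + max(shr)   # shared pages counted once

print(f"naive sum of RES:  {naive:.0f} MB")
print(f"adjusted estimate: {better:.0f} MB")
```

The adjusted figure lands around 100MB rather than ~350MB, which fits much better with the free/buffers/cached numbers on a 360MB box. (This is still approximate; SHR itself is only an estimate of what is actually shared.)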
I agree completely that Apache is a poor choice for serving large files. I went in search of a better alternative 4 years ago when I was preparing to help a friend release his band's 3rd album as a free 50-70MB download and chose an early version of lighttpd at the time.
That doesn't make lighttpd or nginx the best choice for serving dynamic content though, particularly for PHP apps. PHP deployment on Apache is widely understood and pretty bomb-proof (just make sure mod_php and mod_rewrite are enabled and dump your PHP files in the web root). I don't think the same can be said for deploying under fcgi with either lighttpd or nginx.
What's awesome though is that you can have the best of both worlds. It is really simple to set nginx up as a reverse proxy. Once in place, it quickly buffers the responses it gets from Apache, freeing Apache to move on to the next request while nginx deals with feeding the data back to the client. This allows a much smaller pool of Apache processes to handle a given rate of new requests, even with static files. With a little more work, you can have nginx serve static files directly, though I'm not sure how much more that actually improves things. At the same time, you can draw on the large amount of information and expertise around configuring Apache.
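A minimal sketch of that reverse-proxy setup; the port, server name, and paths here are all assumptions for illustration (it presumes Apache has been moved to listen on 127.0.0.1:8080):

```nginx
server {
    listen 80;
    server_name example.com;

    # Optional: serve static files directly, bypassing Apache entirely
    location /static/ {
        root /var/www;
    }

    # Everything else goes to the Apache backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering on;   # buffer the response so Apache is freed quickly
    }
}
```

The proxy_buffering line is the key to the behavior described above: nginx absorbs the response at backend speed and then trickles it out to slow clients, so the fat Apache process isn't tied up for the duration of the transfer.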
You don't waste memory loading PHP into processes that serve static content like Apache does (mod_php means all Apache processes have PHP loaded up), and can configure the number of PHP processes as appropriate to your load.
While deploying PHP with lighttpd might not be widely understood, it's simple enough that you don't even need to edit a config file. You need to execute two commands (three if you count the server reload). It can even be executed on a single line.
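For what it's worth, on Debian/Ubuntu those commands look roughly like this (a sketch, assuming the lighttpd and php5-cgi packages are installed; the shipped config names vary slightly between releases):

```shell
sudo lighty-enable-mod fastcgi fastcgi-php   # symlink the packaged fastcgi configs
sudo /etc/init.d/lighttpd force-reload       # the third command: pick up the change
```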
Pretty much every PHP app is targeted for deployment on Apache first, and fast-cgi etc. second, if at all. Setting nginx (or something similar) up as a reverse proxy to Apache, even without configuring it to serve static files directly, lets you retain much of the simplicity of Apache deployment while getting most of the resource-consumption advantages of cutting Apache out entirely. The waste of having Apache involved in serving static files is mitigated considerably by having nginx buffer them, freeing Apache and its memory to move quickly on to another task.
(nginx reverse-proxy to Apache isn't perfect; it has a few quirks. I'm still not sure of the right way to use phpMyAdmin as packaged for Ubuntu behind an nginx reverse proxy: I get redirect weirdness.)
@Guspaz:
However, Apache would say "Oh, many clients, I should spawn a ton of processes to serve these", consume all available RAM leaving none for the disk, which would then thrash trying to handle all the read requests with no cache.
You're doing it wrong.
For a workload like that, you want to use Apache's mpm_worker instead of the default mpm_prefork, which should never be used for static sites these days. mpm_worker is not as efficient as async servers like lighttpd, but it's a huge improvement over prefork.
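For reference, a worker configuration fragment along the lines of the Apache 2.2 defaults; treat the numbers as illustrative starting points to tune against your RAM, not prescriptions:

```apache
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxClients          150   # total concurrent *threads*, not processes
    MaxRequestsPerChild   0
</IfModule>
```

With worker, 150 concurrent clients costs you a handful of processes instead of 150 preforked ones, which is exactly what keeps a burst of large-file downloads from eating the RAM your page cache needs.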