[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting

Alexey Verkhovsky alexey.verkhovsky at gmail.com
Fri Mar 9 20:49:30 EST 2007


OK, having done some research on the subject, I came to the conclusion that
the main problem with my scenario is that Ruby concurrency sucks and should
be avoided. The other big problem is the total RAM footprint.

Therefore, I'm currently leaning towards the following setup for, say, 20
apps running on the same box:

1. Apache (or possibly nginx) with 20 virtual hosts (one vhost per app),
serving static content from ./public and proxying dynamic requests to...

2. HAProxy on the same box, listening on 20 ports (one port per app) and
configured to forward requests to 3-5 downstream ports per app, with
maxconn 1 on each server (which means the proxy sends only one request at a
time to a downstream port). If all downstream servers are busy or down,
HAProxy queues the requests internally. It is also smart about not sending
any requests to servers that are down. Rough config sketches of this layer
and the Apache layer follow the list. Below HAProxy, possibly on one or more
other physical boxes, there are...

3. 3-5 Mongrels per app, with --num-conns=2 (since we are not really sending
them more than one request at a time). This prevents a Mongrel process from
allocating an extra 60 to 100 MB of RAM for itself when it comes under
overload. Not all of these Mongrels need to be running; one or two per app
may well be enough. Two is better than one, as it prevents a long-running
action from holding up other requests.
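
To make (1) a bit more concrete, one of the 20 vhosts could look roughly
like this (the app name, paths and ports are just placeholders, not from my
actual test setup; the [P] rule needs mod_proxy loaded):

    <VirtualHost *:80>
      ServerName app1.example.com
      DocumentRoot /apps/app1/public

      RewriteEngine On
      # Serve files straight off disk when they exist in ./public...
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
      # ...and proxy everything else to this app's HAProxy frontend port
      RewriteRule ^/(.*)$ http://127.0.0.1:9001/$1 [P,QSA,L]
    </VirtualHost>

And a sketch of the matching HAProxy piece for two of the apps, again with
made-up ports (90xx for the per-app frontend ports, 80xx for the Mongrels):

    listen app1 127.0.0.1:9001
      mode http
      balance roundrobin
      # one request in flight per Mongrel; anything beyond that queues in HAProxy
      server app1_mongrel1 127.0.0.1:8001 maxconn 1 check
      server app1_mongrel2 127.0.0.1:8002 maxconn 1 check

    listen app2 127.0.0.1:9002
      mode http
      balance roundrobin
      server app2_mongrel1 127.0.0.1:8011 maxconn 1 check
      server app2_mongrel2 127.0.0.1:8012 maxconn 1 check

The 'check' keyword is what lets HAProxy notice a dead Mongrel and route
around it, and 'maxconn 1' is what keeps the request queue in the proxy
rather than inside Ruby.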

When a "slashdotting" occurs, some sort of smart agent (even a human
operator) can start additional Mongrels as needed.

I created a setup like this on my laptop yesterday and stress-tested it in
some creative ways for half a day, running Mephisto with page caching
turned off.

The Mongrels stayed up through the entire ordeal, at about 48 MB VSS apiece,
because they were basically never overloaded. The system behaved gracefully
under overload (responding to as many requests as it could and returning
HTTP 503 for the rest), and did the right thing when I killed and restarted
individual Mongrels (seamlessly redirecting traffic to the other nodes).

No memory leaks, zombie processes, or other abnormalities were observed.

Another thing I discovered is that Ruby (since 1.8.5) can be told to commit
suicide if it needs to allocate more than a certain amount of RAM.
Process.setrlimit(Process::RLIMIT_AS) is the magic word. This is better
than harvesting oversized processes with a cron job, because a greedy process
dies before the memory is actually allocated, so other processes on the same
OS remain unaffected.
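
For example, something like this near the top of config/environment.rb
would cap every Mongrel (the 150 MB figure is just an illustration; pick
whatever limit fits your app):

    # Hard-cap this process's address space at, say, 150 MB. Once Ruby tries
    # to grow past the cap, the allocation fails and the process dies instead
    # of dragging the rest of the box into swap.
    limit = 150 * 1024 * 1024
    Process.setrlimit(Process::RLIMIT_AS, limit, limit)

HAProxy then notices the dead Mongrel via its health checks and sends
traffic to the remaining ones until it is restarted.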

Am I on the right track? What other issues should I be testing for /
thinking about?

Best regards,
Alex Verkhovsky