[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting

Kirk Haines wyhaines at gmail.com
Sun Mar 4 10:12:59 EST 2007


On 3/2/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> Dear all,
>
> I am researching solutions for the "how do you squeeze as many Rails apps as
> you can onto a cluster" problem.
>
> Environment constraints are as follows:
>
> * 4 commodity web servers (2 CPUs, 8 GB of RAM each)
> * shared file storage and database (big, fast, not a bottleneck)
> * multiple Rails apps running on it
> * normally, the load is insignificant, but from time to time any of these
> apps can have a big, unpredictable spike in load, that takes (say) 8
> Mongrels to handle.
>
> The bottleneck, apparently, is RAM. At 100 MB per Mongrel process, you can
> only put 320 Mongrel processes on those boxes, and under the specified
> parameters you can only handle 40 apps on the hardware described above. PHP
> can handle thousands of sites under the same set of constraints.

Do you have a sense for the requests/second rate it takes to survive
slashdotting?  i.e. you are assuming 8 mongrels/app -- what capacity do
those 8 mongrels actually represent?
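For reference, the RAM arithmetic behind Alexey's 320/40 figures can be sketched in a few lines of Ruby (the numbers are the thread's assumptions, not measurements):

```ruby
# Back-of-the-envelope capacity math using the thread's assumed figures:
# 4 servers x 8 GB RAM, ~100 MB per Mongrel, 8 Mongrels per app at peak.
servers          = 4
ram_per_server   = 8_000   # MB (8 GB)
mb_per_mongrel   = 100
mongrels_per_app = 8

total_mongrels = (servers * ram_per_server) / mb_per_mongrel
apps           = total_mongrels / mongrels_per_app

puts "max Mongrels across the cluster: #{total_mongrels}"  # => 320
puts "max apps at peak provisioning:   #{apps}"            # => 40
```

The point of the exercise: the app count is bounded by resident memory per process, not by request throughput, which is why statically provisioning 8 Mongrels per app caps the cluster at 40 apps.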

> If anybody knows a load balancer smart enough to start and kill multiple
> processes across the entire cluster, based on demand per application, please
> let me know about that, too.

I have developed a clustering proxy from scratch (the first fledgling
release of it should be out today) that is in a good position for
feature requests.  Having a single load balancer start and stop
processes across a cluster, though, calls for quite a bit of
complexity.  A compromise that just manages backends on the same
machine the proxy is running on could have potential.  Something would
proxy requests out to nodes, and the local proxy on each node would
manage its pack of backend processes.  The friction of two layers of
proxying might cost too much throughput, though.
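The per-node compromise could look roughly like the sketch below. This is purely illustrative (the `BackendPack` class and its demand heuristic are invented for this example, not part of any released proxy): a manager that grows and shrinks a pack of backend processes from a simple demand signal, here queue depth standing in for real load.

```ruby
# Hypothetical sketch of a per-node backend manager.  The proxy's event
# loop would call #rebalance periodically with its current queue depth.
class BackendPack
  def initialize(cmd, min: 1, max: 8)
    @cmd, @min, @max = cmd, min, max
    @pids = []
    min.times { spawn_backend }
  end

  def spawn_backend
    return if @pids.size >= @max   # respect the RAM budget
    @pids << Process.spawn(@cmd)
  end

  def reap_backend
    return if @pids.size <= @min   # always keep a warm minimum
    pid = @pids.pop
    Process.kill("TERM", pid)
    Process.wait(pid)
  end

  def size
    @pids.size
  end

  # Crude demand heuristic: grow when requests queue up faster than the
  # current pack can drain them, shrink back toward the minimum when idle.
  def rebalance(queued_requests)
    if queued_requests > @pids.size * 2
      spawn_backend
    elsif queued_requests.zero?
      reap_backend
    end
  end

  def shutdown
    @pids.each { |pid| Process.kill("TERM", pid); Process.wait(pid) }
    @pids.clear
  end
end
```

In this shape, the hard distributed-coordination problem goes away: each node makes purely local decisions, and the front proxy only needs to spread requests across nodes.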

> Finally, I've been thinking about making Rails execution within Mongrel
> concurrent by spawning multiple Rails processes as children of Mongrel, and
> talking to them through local pipes (just like FastCGI does, but a
> Ruby-specific solution). This may allow a single Mongrel to scale 3-4 times
> better than now, and also to scale down if no requests are coming in the
> last, say, 10 minutes. A "blank" Ruby process only takes 7 MB of RAM, perhaps
> a "blank" Mongrel is not much more (I haven't checked yet). Wonder if this
> makes sense, or am I just crazy.

This sounds like a way to implement something similar to what I described above.
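The parent/child pipe arrangement Alexey describes can be sketched with plain `fork` and `IO.pipe`. This is an assumed design for illustration, not an existing Mongrel feature; the worker just echoes instead of dispatching into Rails:

```ruby
# Sketch of the FastCGI-like idea: a lightweight parent hands each
# request line to a forked worker over one pipe and reads the response
# back over another.
to_child, parent_writer   = IO.pipe   # parent -> child
parent_reader, from_child = IO.pipe   # child -> parent

child = fork do
  parent_writer.close
  parent_reader.close
  from_child.sync = true
  # Worker loop: in the real idea, each request would dispatch into Rails.
  while (req = to_child.gets)
    from_child.puts "handled: #{req.chomp}"
  end
end

to_child.close
from_child.close
parent_writer.sync = true

parent_writer.puts "GET /"    # hand a request to the worker
resp = parent_reader.gets
puts resp                     # => "handled: GET /"

parent_writer.close           # EOF lets the worker loop exit
Process.wait(child)
```

Since a blank Ruby parent like this stays small, the memory win would come from forking the heavy Rails children only while requests are actually arriving, and letting them exit when idle.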


Kirk Haines
