[Mongrel] multi threaded theoretically useful?
wyhaines at gmail.com
Sat Sep 8 19:49:58 EDT 2007
On 9/8/07, Ashley Moran <work at ashleymoran.me.uk> wrote:
> Like I touched on in my last post, this is the strategy that Erlang
> takes, right? (They seem to have the concurrency problem solved.)
> My immediate interest here is mainly lower memory consumption. The
> server (read - dirt cheap PC) I just bought "only" has 1GB RAM, and
> will only hold 2GB. Simply because I can't afford to go and buy
> anything bigger right now, I'd like to know I don't have to waste RAM
> running a shedload of 50-200MB processes.
Just as a practical example, I run about 60 separate backend ruby
processes on one very modest server with 2GB of RAM, and a bunch more
across a few other servers. These 60+ processes represent a cross
section of communications/request-management techniques, all backed by
the same basic framework, IOWA.
Some are multithreaded with a mod_ruby based handler for
communications to the backends (these are the oldest ones).
Some use FCGI as the communications conduit, to multithreaded backends.
Some run in Mongrel via a mongrel handler, multi-threaded.
Some use the Mongrel HTTP parser inside an event based application
loop, rendering them effectively single threaded, event based apps.
And some of them run behind Swiftiply, some with one node, and some
with multiple nodes.
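To make the event-based shape concrete, here is a minimal sketch in plain stdlib Ruby -- not the actual IOWA or Mongrel code, and using a trivial hand-rolled response instead of Mongrel's HTTP parser -- of a single-threaded loop where one IO.select call services every socket:

```ruby
require 'socket'

# One process, one thread: a select loop multiplexes the listener and
# all connected clients. No thread-per-request overhead.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

clients = []
responses = 0

# Drive a single client from a helper thread just to exercise the loop.
t = Thread.new do
  s = TCPSocket.new('127.0.0.1', port)
  s.write("GET / HTTP/1.0\r\n\r\n")
  s.read   # wait for the response
  s.close
end

loop do
  readable, = IO.select([server] + clients, nil, nil, 1)
  break if readable.nil?   # safety timeout
  readable.each do |io|
    if io == server
      clients << server.accept
    else
      io.readpartial(4096) rescue nil   # consume the request
      io.write("HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
      io.close
      clients.delete(io)
      responses += 1
    end
  end
  break if responses >= 1
end
t.join
server.close
puts responses
```

A real deployment would parse requests properly (Mongrel's HTTP parser slots in where the readpartial is) and keep connections in flight, but the single-threaded dispatch structure is the same.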
On the one machine that has 60 backends on it, each of those
represents a separate site. That is, each site runs off of a
single process. They all use the same basic framework -- IOWA --
regardless of the communications model being used.
The sites with the greatest throughput capacity are invariably the
ones that run an event based communications model -- either the
mongrel HTTP parser inside an event loop, or behind Swiftiply. I have
sites that, with every page rendered dynamically (from content stored
in a database), are capable of handling 285 requests/second with no
concurrency, up to about 360/second with a concurrency of 50 (measured
using ab). If I switch it to a multithreaded model (mongrel handler),
the best that I can do, with no concurrency, is 220 page
requests/second, and if I have much concurrency at all, that number
quickly drops toward 150/second.
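For reference, measurements like those above come from ApacheBench invocations along these lines (the hostname is hypothetical; -n is the request count, -c the concurrency level):

```shell
# no concurrency:
ab -n 1000 -c 1 http://backend.example.com/
# concurrency of 50:
ab -n 1000 -c 50 http://backend.example.com/
```

ab reports a Requests per second line for each run, which is the figure quoted here.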
I have two points, here. First, you can run a considerable number of
ruby based sites on a single modest server like yours, and they can
handle a rather impressive traffic load on a single process, depending
on your framework. And second, running the same code in an event
based pipeline versus a multithreaded pipeline invariably gives me
better performance in my apps.