[Mongrel] Mongrel Tuneup Guide: Questions

Zed Shaw zedshaw at zedshaw.com
Tue Sep 5 18:05:00 EDT 2006

On Tue, 2006-09-05 at 16:55 +0530, Vishnu Gopal wrote:
> Since the list has been silent on this, I'll add some more of my
> observations to this post.
> I redid the test using my dev Mac mini as a guide. It's a dual core
> machine, and here's what the results looked like:
> The numbers again are the 10sec --nconns and the final --rate
> 1 Mongrel serving a test /test (w/o db access, very simple page):
> 1600 160
> 4 Mongrels serving a /test:
> 1400 190
> 1 Mongrel serving a full /index:
> 200 18
> 2 Mongrels serving a full /index:
> 190 26

Hmm, I'd question either how you measured or how you have things
configured.  Take a look at your system stats, try moving mysql out,
change it up a bit and see what the optimal configuration is with just 2
mongrels.  I'm suspecting there's something else going on since those
results are contrary to what I've seen.

Now, what you've done is correct and it's showing you the limits of your
configuration.  First, the best you can do is 160 req/sec.  Second, your
typical page will get around 26 req/sec.  Now, whether this is enough
for you depends on your application.

One question though, did you run httperf on the same machine as the one
you're testing?  httperf will use the CPU pretty heavily when it's doing
a good test, so you'll need to use a laptop connected to the same
switch/router as your server.  Also, MySQL is a memory hog, which
impacts Mongrel quite a bit.
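For reference, the kind of httperf run being discussed looks roughly like this (hostname and numbers are placeholders matching the 10-second test described above):

```shell
# Run this from a separate box on the same switch -- never on the
# server under test, since httperf itself eats a lot of CPU.
httperf --server app.example.com --port 80 --uri /index \
        --num-conns 1600 --rate 160 --timeout 5
```

The final report's reply rate is the req/sec number being quoted in this thread.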

> The lesson learnt is that CPU loads become a factor very quickly with
> mongrels and lots of hits on the server, esp if the CPU is paltry.

Yep, this also shows that you've got an index page you can make faster,
but that it might be cheaper for you to just throw hardware at it (as
you mention below).

> I'm not sure what kind of configuration all the rest of the tests I've
> found on the net have been run on [aah here is one:
> http://blog.kovyrin.net/2006/08/28/ruby-performance-results/ run on a
> 4x xeon with 4g ram :-)] but imho you still need a fast (much faster)
> web server even on a single node to run rails/mongrel as compared to
> php. Going the php way with very cheap machines is probably not going
> to work.

One of the reasons I tell people to do this kind of test is for the
simple reason that buying the same hardware as Ezra doesn't mean you get
the same performance he does.  He's got a different application,
different code, probably has his database on a different server, and he
might have some tricks you don't.

What you need to do now is go tinkering with your configuration to see
if you can get it to go faster.  I think now you're at the stage where
the numbers are showing your configuration isn't that great, but you
need to take the next step and say, "Ok, I'm a moron, I should have this
configured differently."

Having these numbers then makes your task easier.  You know the best
that one machine can get is 160 req/sec, so you'll need at least 2.  You
know what your performance is for /index, so you can make changes and
continue measuring to see if these changes help.  Without these numbers
you'd be just guessing.
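To make that concrete, here's the back-of-the-envelope math in Ruby, using the measured 26 req/sec for /index and the 300 req/sec target that comes up later in this thread (the target is the poster's figure, not a recommendation):

```ruby
# Rough capacity math from the measured numbers above.
per_machine = 26.0    # measured req/sec for /index on one box
target      = 300.0   # the poster's hoped-for aggregate rate
machines    = (target / per_machine).ceil
puts "need ~#{machines} machines"   # => need ~12 machines
```

That's the whole point of measuring first: the same one-liner tells you immediately whether a config change that bumps /index to, say, 50 req/sec cuts your hardware bill in half.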

> The question I'm most interested in, and which I'd really like an
> answer to is that I decided on Mongrel because of the HTTP stack used
> throughout. I could basically have a load balancer hitting mongrels on
> multiple machines... very flexible stuff and not possible with the
> traditional fastcgi model.
> If I do buy more (how many?) cheap machines serving 7req/s each, and
> then load balance all of em (say with hardware), could I realistically
> hope to hit comfy loads like 300 req/s? And will this continue to
> scale?

Ok, yes, you can do this, but where did 300 req/s come from?  Did you
just make that up?  Is it based on user surveys of what they think of
the page speed?  Is it necessary for all your pages?

The point of scalability is to start small then *scale* up.  Not to go
crazy buying billions of dollars worth of equipment to hit some made-up
vanity number.
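That said, the fan-out the poster asks about is mechanically simple. Here's a sketch of an nginx upstream pool proxying to several Mongrels (hosts and ports are invented for the example; any HTTP-aware balancer works the same way):

```nginx
# Pool of Mongrel backends across two cheap machines.
upstream mongrels {
    server 10.0.0.1:8000;
    server 10.0.0.1:8001;
    server 10.0.0.2:8000;
    server 10.0.0.2:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://mongrels;
    }
}
```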

Sit back for a bit and do an analysis of what the req/s of these pages
actually needs to be in order to meet service goals.  Then go in and try
other simple tricks to see if you can make them marginally faster.  For
example, unless your index page changes drastically for every request
you should be using page caching.  If your index page *must* change for
every request, then consider redesigning it so that this isn't a
requirement.
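To show what page caching buys you, here's a pure-Ruby sketch of the idea: render once, write the HTML into the web server's document root, and let the web server serve the file directly so later hits never touch Rails at all. (In Rails itself this is just `caches_page :index` in the controller; the paths and markup here are made up for the demo.)

```ruby
require 'fileutils'

# Write rendered HTML into the docroot, the way page caching does.
def cache_page(html, path, docroot)
  file = File.join(docroot, "#{path}.html")
  FileUtils.mkdir_p(File.dirname(file))
  File.write(file, html)
  file
end

file = cache_page("<h1>front page</h1>", "index", "/tmp/demo_docroot")
puts File.read(file)   # => <h1>front page</h1>
```

The catch, as noted above, is invalidation: you have to delete the cached file whenever the page's content actually changes.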

Stefan Kaes has lots of really good recommendations on things you can do
to tune your application.  Go read his stuff.

But, base this tuning and your effort and expenditures on a realistic,
reasonable and measurable goal.

> Or should I just buy two 4x xeons and be done with it?

First, double check that you did the analysis right, check your
configuration and try to make it faster, then, yes, buy a *little* bit
of hardware to run another test and see if that improves things.  Don't
spend a
million dollars on a solution that might not work.

Which brings me to another point:  Maybe Mongrel isn't for you.  There's
a bunch of contenders right now, and sometimes Mongrel's simplicity and
flexibility don't work for people.  Other folks run pretty big, fast
sites on Mongrel, but they understand how to tune Rails.  If you're not
in this camp then go check out some of the other Rails deployment
options.  First up would be Litespeed.  It's commercial software and
you'd pay per CPU, but if it gives you your 300 req/sec vanity number
then spend the money there.

Hope that helps.

Zed A. Shaw
http://www.lingr.com/room/3yXhqKbfPy8 -- Come get help.
