[Mongrel] Memory leaks in my site

Alexey Verkhovsky alexey.verkhovsky at gmail.com
Wed Mar 7 01:55:06 EST 2007

On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:

Looks like we are at the same point on the learning curve with this stuff.
So, let's share our discoveries with the rest of the world :)

> httperf --server --port 3000 --rate 80 --uri /my_test --num-call
> 1 --num-conn 10000
> The memory usage of the mongrel server grows from 20M to 144M in 20
> seconds, it's

This is exactly what Mongrel does when it cannot cope with the incoming
traffic. I've discovered the same effect today.

You are definitely overloading it with 80 requests per second. After all,
it's a single-threaded instance of a fairly CPU-heavy framework. With no
page caching it should cope with ~10 to 30 requests per second max.
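For reference, the quoted httperf run can be dialed back into that range with something like the following (a sketch: the server name is a placeholder, the endpoint is carried over from the quoted command, and --num-conns/--num-calls are the canonical spellings of the abbreviated flags above):

```shell
# Roughly 20 req/s for 100 seconds against the same endpoint -- a rate
# an un-cached single Mongrel should sustain without backing up.
httperf --server localhost --port 3000 --rate 20 \
        --num-conns 2000 --num-calls 1 --uri /my_test
```

If memory stays flat at that rate but balloons at 80 req/s, the growth is queued requests piling up, not a leak in the app itself.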

The crappy part is that after the overload condition ends, the Mongrel
process stays at 150 MB. Not a problem when you are hosting one app on the
box, but it becomes a problem when it's ten.

By the way, check the errors section of the httperf report, and the
production.log. See if there are "fd_unavailable" socket errors in the
former, and probably some complaints about "too many files open" in the
latter. If there are, you need to either increase the number of file
descriptors allowed by the Linux kernel, or decrease the max number of open
sockets in the Mongrel(s) with the -n option. I don't know if that solves
the "RAM footprint growing to 150 MB" problem... I will know first thing
tomorrow morning :)
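A quick way to check the descriptor situation before tuning either side (a sketch; the 4096 and 100 values below are illustrative, not recommendations):

```shell
# Per-process open-file limit; each client socket costs one descriptor,
# so an httperf run at a high --rate can exhaust it (fd_unavailable).
ulimit -n

# System-wide ceiling on open files (Linux).
cat /proc/sys/fs/file-max

# To raise the soft limit in the shell that launches Mongrel, e.g.:
#   ulimit -n 4096
# Or cap concurrency on the Mongrel side instead; -n (--num-procs) is
# the number of concurrent clients accepted before refusing connections:
#   mongrel_rails start -p 3000 -n 100 -d
```

Capping with -n trades refused connections for a bounded footprint, which may be the better deal on a ten-app box.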

