[Mongrel] Memory leaks in my site

Kirk Haines wyhaines at gmail.com
Wed Mar 7 06:14:57 EST 2007


On 3/6/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:

> Looks like we are on the same stage of the learning curve about this stuff.
> So, let's share our discoveries with the rest of the world :)
>
> > httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test
> > --num-call 1 --num-conn 10000
> >
> > The memory usage of the mongrel server grows from 20M to 144M in 20
> > seconds, it's
>
> This is exactly what Mongrel does when it cannot cope with the incoming
> traffic. I've discovered the same effect today.

I think it's fair to make a distinction here.

What is probably happening is that Rails is not keeping up with that
80 reqs/second rate; it's not Mongrel.  On any remotely modern
hardware, Mongrel will easily keep that pace itself.

The net effect, though, does show up in Mongrel: it creates a thread
for each accepted connection, so when Rails falls behind, those threads
pile up.  Each thread is fairly memory intensive, since it carries a
fair amount of context with it, yet all it is doing is sitting there
sleeping behind a mutex, waiting for its chance to wake up and run its
request through the Rails handler.

> By the way, check the errors section of httperf report, and the
> production.log. See if there are "fd_unavailable" socket errors in the
> former, and probably some complaints about "too many files open" in the
> latter. If there are, you need to either increase the number of file
> descriptors in the Linux kernel, or decrease the max number of open sockets
> in the Mongrel(s), with -n option. I don't know if it solves the "RAM
> footprint growing to 150 Mb" problem... I will know it first thing tomorrow
> morning :)

No.  That is probably happening because of the file descriptor limit
in Ruby.  Your Mongrel has accepted as many connections as Ruby can
handle; it is out of descriptors.
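If you want to see what limit the process is actually running under,
something like this from inside Ruby will show it (assuming a Ruby
recent enough to expose Process.getrlimit; the soft value should
generally match what `ulimit -n` reports in the shell that started
Mongrel):

    # Soft and hard limits on open file descriptors for this process.
    soft, hard = Process.getrlimit(Process::RLIMIT_NOFILE)
    puts "open file limit: soft=#{soft} hard=#{hard}"

Keep in mind that older Rubies built around select() are typically
capped at 1024 descriptors regardless of that setting, which is
probably the "limit in Ruby" at work here.  Every accepted but not yet
answered connection holds one descriptor, so once the backlog reaches
the cap you start seeing fd_unavailable errors on the httperf side.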


Kirk Haines

