[Mongrel] Memory leaks in my site

Zed A. Shaw zedshaw at zedshaw.com
Thu Mar 8 01:26:26 EST 2007

On Wed, 7 Mar 2007 17:38:19 -0700
"Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:

> A more "industrial" solution would be to redesign Mongrel's internal
> architecture a bit. Requests routed to Rails can be placed in a queue, and
> the thread released, instead of being parked at the mutex. It would help
> people hosting their apps in those 64 Mb slices to work without an upstream
> web server, but it would also add complexity to the code. One of those
> tradeoffs.

They are kept in a queue (a couple of queues, actually), and they are
killed off when Mongrel gets a chance.  If you're thrashing it, it
doesn't get much of a chance.
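To make the tradeoff Alexey describes concrete, here's a minimal sketch (illustrative only, not Mongrel's actual internals; all names are made up) of the queue-based model: accepting threads just enqueue the request and return, and a single worker drains the queue serially, instead of one thread per request parking at a mutex:

```ruby
# Queue-based dispatch sketch: Thread::Queue is thread-safe, so the
# accepting side can hand off work without holding a lock itself.
queue   = Queue.new
handled = []

worker = Thread.new do
  while (req = queue.pop)   # nil acts as a shutdown sentinel
    handled << req          # stand-in for dispatching into Rails
  end
end

# "Accept" ten requests; each enqueue returns immediately, so no
# request-holding thread piles up behind the app lock.
10.times { |i| queue << "request #{i}" }
queue << nil
worker.join

puts handled.size  # => 10
```

The point of the design is that the memory cost of a backlog is one queued object per request rather than one whole thread per request.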

I'd say, first off, the solution is: just quit doing that.  If you're
maxing out your Mongrel servers then you're seriously screwed anyway,
and there's nothing but -n to help you.  No amount of additional
queuing will help.  You have to be smarter about it, especially if
you've got a setup with only 64 MB of RAM and somehow you're getting
so many requests you can't keep up.  Time to stop being cheap and fork
over the extra money for a bigger slice (which is good, because it
means you're popular).

I went through this many times over back in the deep dark days before
fastthread, and no matter what you do, if you're piling requests up
behind some kind of list (whether that's a mutex or something else)
you build up RAM.  It's as simple as: you make threads, threads take
RAM, and threads don't go away fast enough.
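You can watch that pile-up directly.  This sketch (again illustrative, not Mongrel source) parks threads behind a mutex the way backed-up requests do, and counts how many are alive at once, each one holding its own stack until the lock frees up:

```ruby
lock = Mutex.new
lock.lock  # simulate the app being busy (main thread holds the lock)

# Each "request" thread parks at the mutex: cheap to create,
# but it can't die, and its stack can't be reclaimed, until its turn.
parked = 50.times.map do
  Thread.new { lock.synchronize { } }
end

sleep 0.1  # give them all time to block on the mutex
alive_while_blocked = parked.count(&:alive?)

lock.unlock          # app frees up
parked.each(&:join)  # backlog drains, threads finally exit

puts alive_while_blocked  # => 50
```

Fifty stacks' worth of RAM sitting there doing nothing, which is exactly the growth pattern you see when requests stack up faster than the app can clear them.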

Ultimately, though, everyone will just keep cycling over the same old
problems, looking for ways to solve them within Mongrel, when really
the solution has to come from ruby-core.  Until Ruby's IO, GC, and
threads improve drastically, you'll keep hitting these problems.

Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book

More information about the Mongrel-users mailing list