[Mongrel] Memory leaks in my site

Zed A. Shaw zedshaw at zedshaw.com
Thu Mar 8 13:27:58 EST 2007

On Wed, 7 Mar 2007 23:35:36 -0700
"Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:

> Some further findings:
> "Hello World" Rails application on my rather humble rig (Dell D620 laptop
> running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2)  can handle over 500 hits
> per second on the following action:
> def say_hi
>   render :text => 'Hi!'
> end
> It also doesn't leak any memory at all *when it is not overloaded*. E.g.,
> under maximum non-concurrent load (single-threaded test client that fires the
> next request immediately upon receiving a response to the previous one), it
> stays up forever.

Sigh, I'll test this too, but the solution is to set -n to something
reasonable for your setup.
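Something like the line below; the port and environment are just
examples, and 64 is a guess rather than a recommendation, so tune -n
(--num-procs) to what your box can actually service:

  mongrel_rails start -d -e production -p 8000 -n 64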

> What I was thinking is that by uncoupling the request from its thread, you
> can probably max out all capabilities (CPU, I/O, Rails) of a 4 cores
> commodity box with only 15-30 threads. 10-20 request handlers (that will
> either serve static stuff or carry the request to the Rails queue), one
> rails handler that loops over requests queue, takes requests to Rails and
> drops responses off in the response queue, 5-10 response handlers (whose job
> is simply to copy Rails response from responses queue to originating
> sockets).
> Right now, as far as I understand the code, the request is carried all the
> way through by the same thread.
> On second thoughts, this is asynchronous communications between threads
> within the process. Far too much design and maintenance overhead for the
> marginal benefits it may (or may not) bring. Basically, just me being stupid
> by trying to be too smart. :)

Exactly, I went through this design too using various queueing
mechanisms and Ruby's Thread primitives were just too damn slow.  The
best way to get max performance was to start a thread which handled the
request and response.  The fastest (by a small margin) was not using
threads at all but instead using a select loop.  The problem with that is
that if your Rails code starts a Thread and doesn't do it right, Ruby's
idiotic deadlock detection kicks in, because it counts the blocked select
calls when it looks for deadlocks.

Now that fastthread is out though, it might be worth checking out the
queueing model to see if it's still slow as hell or not.  Ultimately I
wanted a single thread that listened for connections and built the
HttpRequest/Response objects using select, then fired these off to a
queue serviced by N processor threads.  Queue was just too damn slow to
pull it off, so it didn't work out.
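For reference, the shape I'm describing is roughly the sketch below.  It
is not Mongrel code and it skips the select() handling entirely; the
port, worker count, and canned response are all made up.  It just shows
one acceptor thread pushing sockets onto a Queue and N processor threads
popping them off, which is exactly the spot where Queue's overhead hurt:

  require 'socket'
  require 'thread'   # swap in 'fastthread' here to test the faster primitives

  NUM_WORKERS = 10                         # made-up number, tune per setup
  queue  = Queue.new
  server = TCPServer.new('0.0.0.0', 8080)  # made-up port

  # N processor threads: each pops a ready connection and services it.
  NUM_WORKERS.times do
    Thread.new do
      loop do
        client = queue.pop     # blocks until the acceptor hands over a socket
        client.gets            # toss the request line; a real server parses it
        client.write "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nHi!"
        client.close
      end
    end
  end

  # Single acceptor (the main thread here): accept and enqueue, nothing else.
  loop { queue << server.accept }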

> > Until Ruby's IO, GC, and threads improve drastically you'll keep hitting
> > these problems.
> Yes. Meantime, the recipe apparently is "serve static stuff through an
> upstream web server, and use smaller values of --num-procs". A Mongrel that
> only receives dynamic requests is, essentially, a single-threaded process
> anyway. The only reason to have more than one (1) thread is so that other
> requests can queue up while it's doing something that takes time. Cool.

Not really; if you set Mongrel to handle only -n 1, then your web server
will randomly kill off mongrels and connections from clients whenever
you run out of backends to service requests.  The nginx author is
currently working on a mechanism to allow you to queue the requests at
the proxy server before sending them back.

Also, -n 1 will not work for all the other Ruby web frameworks that
don't have this locking problem.  All of the other frameworks are
thread safe (even the ones that use AR) and can run multiple requests
concurrently.

> By the way, is Ruby 1.9 solving all of these issues?

No idea, considering 1.9 is decades out at its current pace.  You
should go look at JRuby if you want something modern that's able to run
Rails right now (and Mongrel).

> > No one who is sane is trying to really run a Rails app on a 64 meg VPS --
> > that's just asking for a lot of pain.
> Well, entry-level slices on most Rails VPS services are 64 MB.
> My poking around so far seems to say "it's doable, but you need to tune it".

No, people need to quit thinking that this will work the way it did
when they dumped their crappy PHP code into a directory and prayed
Apache would run it.  Even in those situations, that ease of deployment
and the ability to run on small installations was an illusion.  Talk to
anyone who does serious PHP hosting and they'll tell you it gets much
more complicated.

Sorry to be so harsh, but as the saying goes, you can have it cheap,
fast, or reliable: pick one.  (Yes, one, I'm changing it. :-)

However, why are people complaining about 64M of RAM for a Mongrel
process?  C'mon, Java processes typically hit the 600M or even
2G ranges and that's just commonplace.  If you want small-scale
hosting, you'll have to try a different solution entirely.

Even better, why are people complaining about the memory footprint of
RAILS on the MONGREL mailing list?  These same problems existed before
Mongrel, and when you complain here there's nothing I can really do.
If you want the RAM in Rails to go down, then start writing the patches
to get it to go down.  I'm sure there's just oodles of savings to be
made inside Rails.

Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
