[Mongrel] Design flaw? - num_processors, accept/close
alexey.verkhovsky at gmail.com
Tue Oct 16 02:02:16 EDT 2007
On 10/15/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:
>Tried Lucas Carlson's Dr. Proxy yet? Other solutions? Evented mongrel?
HAProxy (and some other proxies smarter than mod_proxy_balancer)
solves this problem by letting you set the maximum number of requests
outstanding to any node in the cluster. Setting it to 1 means that it
will only send a Mongrel instance a request when that instance isn't
already serving one. This makes perfect sense with Rails
(single-threaded), especially if you have something else serving
static content in this setup.
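For illustration, a minimal HAProxy backend along these lines might
look like the following (server names and ports are made up; the key
piece is maxconn 1 on each server line):

```
backend mongrels
    balance roundrobin
    # maxconn 1: hand each Mongrel at most one request at a time;
    # further requests queue in HAProxy instead of piling up in Ruby
    server app0 127.0.0.1:8000 maxconn 1
    server app1 127.0.0.1:8001 maxconn 1
```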
Setting num_processors to 1 is only viable when you have a proxy
that can restrict itself to one in-flight request per Mongrel.
Otherwise, if I remember correctly, you just replace occasional
delays with HTTP 503s. Not a good trade-off.
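If you do want to lower it, Mongrel exposes this as a start-up option
(a sketch; check `mongrel_rails start -h` on your version for the
exact flag):

```
mongrel_rails start -e production -p 8000 --num-procs 1
```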
Setting num_processors low has a positive side effect: it limits how
far your Mongrel can grow in memory when put under strain, even for
a short period. It grows by allocating RAM to new threads (which then
pile up on the Rails mutex). With, say, 10 Mongrels and a default
num_processors = 1024, allocating memory for 1024 * 10 threads means
hundreds of megabytes of RAM.
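As a back-of-envelope check (the ~64 KB per-thread figure here is an
assumption; the actual per-thread cost varies with Ruby version and
stack size):

```ruby
mongrels       = 10
num_processors = 1024   # the default ceiling discussed above
kb_per_thread  = 64     # assumed RAM per queued Ruby thread

# Worst case: every Mongrel fills its thread quota at once
total_mb = mongrels * num_processors * kb_per_thread / 1024
puts "~#{total_mb} MB"  # => ~640 MB across the cluster
```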
I usually set num_processors to something a bit bigger than 1 (say,
5), just so that monitoring can hit it at the same time when load