[Mongrel] Workaround found for request queuing vs. num_processors, accept/close

Robert Mela rob at robmela.com
Mon Oct 15 16:52:27 EDT 2007

I've discovered a setting in mod_proxy_balancer that prevents the 
Mongrel/Rails request queuing vs. accept/close problem from ever being 
triggered.

For each BalancerMember:

  - max=1 -- caps the number of connections Apache will open to that 
    BalancerMember at one
  - acquire=N -- the maximum time to wait to acquire a connection to a 
    balancer member (note that the Apache docs give N in milliseconds)

So, at a minimum:

   BalancerMember http://foo max=1 acquire=1

and I'm using:

   BalancerMember http://foo max=1 keepalive=on acquire=1 timeout=1
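
For reference, a complete balancer section along those lines would look 
something like this (the balancer name, hosts, and ports are 
placeholders for your own):

   <Proxy balancer://mongrels>
       # One line per Mongrel, each capped at a single connection
       BalancerMember http://127.0.0.1:8000 max=1 keepalive=on acquire=1 timeout=1
       BalancerMember http://127.0.0.1:8001 max=1 keepalive=on acquire=1 timeout=1
       BalancerMember http://127.0.0.1:8002 max=1 keepalive=on acquire=1 timeout=1
   </Proxy>
   ProxyPass / balancer://mongrels/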


I experimented with three Mongrel servers and tied one up for 60 
seconds at a time by calling "sleep" in a handler.
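
For reference, the handler was essentially just a call to sleep -- 
something along these lines, using Mongrel's HttpHandler API (the host, 
port, and URI here are arbitrary):

   require 'mongrel'

   # Ties up one Mongrel worker for 60 seconds, simulating a
   # long-running request.
   class SleepHandler < Mongrel::HttpHandler
     def process(request, response)
       sleep 60
       response.start(200) do |head, out|
         head["Content-Type"] = "text/plain"
         out.write("done\n")
       end
     end
   end

   server = Mongrel::HttpServer.new("127.0.0.1", "8000")
   server.register("/", SleepHandler.new)
   server.run.join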

Without the "acquire" parameter, mod_proxy_balancer's simple round-robin 
scheme blocked waiting when it reached a busy BalancerMember, 
effectively queuing the request.  With "acquire" set, the balancer 
stepped over the busy BalancerMember and continued searching through its 
round-robin cycle.

So, whether or not Mongrel's accept/close and request queuing are 
issues, there is a setting in mod_proxy_balancer that prevents either 
problem from being triggered.

At a bare minimum, for single-threaded processes running in Mongrel, one 
line per Mongrel:

   BalancerMember http://foo max=1 acquire=1
   BalancerMember http://bar max=1 acquire=1

With all BalancerMembers busy, Apache returns a 503 Server Busy, which is 
a heck of a lot more appropriate than a 502 proxy error.


It turns out that having Mongrel reap threads before calling accept both 
prevents queuing in Mongrel and prevents Mongrel's accept/close behavior.
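
Schematically, the change amounts to something like this -- a sketch of 
the idea only, not Mongrel's actual accept loop:

   # Reap finished workers *before* accepting, so the worker count is
   # accurate and we never accept a connection we'd have to close.
   loop do
     reap_dead_workers
     if workers.list.length >= num_processors
       # At capacity: leave new connections waiting in the kernel's
       # listen queue rather than doing accept-then-close.
       sleep 0.25
       next
     end
     client = socket.accept
     workers.add(Thread.new { process_client(client) })
   end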

But BalancerMembers in mod_proxy_balancer will still need "acquire" to 
be set -- otherwise proxy client threads will sit around waiting for 
Mongrel to call accept -- effectively queuing requests in Apache.

Since max=1 acquire=1 steps around the queuing problem altogether, the 
reap-before-accept fix, though more correct, is of no practical benefit.


With the current Mongrel code, BalancerMember max > 1 with Mongrel 
num_processors == 1 triggers the accept/close bug.

Likewise, BalancerMember max > 1 with Mongrel num_processors > 1 runs 
into Mongrel's request queuing.


Conclusion ---

I'd like to see Mongrel return a 503 Server Busy when an incoming 
request hits the num_processors limit.
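
Schematically, instead of silently closing the socket at the limit, the 
accept loop could answer with a minimal response first -- a sketch, not 
a real patch:

   client = socket.accept
   if workers.list.length >= num_processors
     # Tell the client we're busy instead of silently dropping it.
     client.write("HTTP/1.1 503 Service Unavailable\r\n" \
                  "Connection: close\r\nContent-Length: 0\r\n\r\n")
     client.close rescue nil
   else
     workers.add(Thread.new { process_client(client) })
   end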

For practical use, the fix is to configure mod_proxy_balancer so that it 
shields Mongrel from encountering either issue.
