[Mongrel] Workaround found for request queuing vs. num_processors, accept/close

Evan Weaver evan at cloudbur.st
Mon Oct 15 18:04:31 EDT 2007


Very cool. Can you do a little performance testing to see if it's more
efficient under various loads than the current way? I would expect it
to make a small but significant difference when you're near the CPU
saturation point, but not much if you're below it (enough free
resources already) or above it (requests will pile up regardless). It
may be worse in the overloaded situation because no one's request will
get through--the queue might grow indefinitely instead of getting
truncated.

The 503 behavior seems reasonable.

Evan

On 10/15/07, Robert Mela <rob at robmela.com> wrote:
> Typo -- the following is incorrect:
>
>    With the current Mongrel code,  BalancerMember max > 1 and Mongrel
> num_processors > 1 triggers the  accept/close bug.
>
> should be:
>
>    With the current Mongrel code,  BalancerMember max > 1 and Mongrel
> num_processors = 1 triggers the  accept/close bug.
>
> ====
> Robert Mela wrote:
> > I've discovered a setting in mod_proxy_balancer that prevents the
> > Mongrel/Rails request queuing vs. accept/close problem from ever being
> > reached.
> >
> > For each BalancerMember
> >
> >  - max=1  -- caps the maximum number of connections Apache will open
> > to a BalancerMember at 1
> >  - acquire=N  -- the maximum amount of time (N seconds) to wait to
> > acquire a connection to a balancer member
> >
> > So, at a minimum:
> >
> >   BalancerMember http://foo max=1 acquire=1
> >
> > and I'm using
> >
> > BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1
> > timeout=1
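> >
> > In context, the relevant piece of the Apache config looks roughly like
> > this (a sketch -- the balancer name and the extra ports are just
> > placeholders):
> >
> >   <Proxy balancer://mongrels>
> >     BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1 timeout=1
> >     BalancerMember http://127.0.0.1:9001 max=1 keepalive=on acquire=1 timeout=1
> >     BalancerMember http://127.0.0.1:9002 max=1 keepalive=on acquire=1 timeout=1
> >   </Proxy>
> >
> >   ProxyPass / balancer://mongrels/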
> >
> > =====
> >
> > I experimented with three Mongrel servers, and tied one up for 60
> > seconds at a time by calling "sleep" in a handler.
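> >
> > The handler was along these lines (a sketch, not the exact test code):
> >
> >   require 'mongrel'
> >
> >   class SleepHandler < Mongrel::HttpHandler
> >     def process(request, response)
> >       sleep 60   # tie up this Mongrel's request thread for a minute
> >       response.start(200) do |head, out|
> >         head["Content-Type"] = "text/plain"
> >         out.write("done sleeping\n")
> >       end
> >     end
> >   end
> >
> >   # Registered on one of the balancer's Mongrels, e.g.:
> >   #   server = Mongrel::HttpServer.new("127.0.0.1", "9000")
> >   #   server.register("/sleep", SleepHandler.new)
> >   #   server.run.join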
> >
> > Without the "acquire" parameter, mod_proxy_balancer's simple
> > round-robin scheme blocked waiting when it reached a busy
> > BalancerMember, effectively queuing the request.  With "acquire" set,
> > the balancer stepped over the busy BalancerMember and continued
> > searching through its round-robin cycle.
> >
> > So, whether or not Mongrel's accept/close and request queuing are
> > issues, there is a setting in mod_proxy_balancer that prevents either
> > problem from being triggered.
> >
> > At a bare minimum, for a single-threaded process running in Mongrel
> >
> >   BalancerMember http://127.0.0.1:9000 max=1 acquire=1
> >   BalancerMember http://127.0.0.1:9001 max=1 acquire=1
> >   ...
> >
> > With all BalancerMembers busy, Apache returns a 503 Server Busy, which
> > is a heck of a lot more appropriate than a 502 proxy error.
> >
> > ======
> >
> > It turns out that having Mongrel reap threads before calling accept
> > prevents both queuing in Mongrel and Mongrel's accept/close behavior.
> >
> > But BalancerMembers in mod_proxy_balancer will still need "acquire" to
> > be set -- otherwise proxy client threads will sit around waiting for
> > Mongrel to call accept -- effectively queuing requests in Apache.
> >
> > Since max=1 acquire=1 steps around the queuing problem altogether, the
> > reap-before-accept fix, though more correct, is of no practical benefit.
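> >
> > For reference, the reap-before-accept idea in sketch form (not the
> > actual Mongrel code; process_client is a trivial stand-in, and a
> > ThreadGroup drops finished threads from its list on its own, which
> > here does the "reaping"):
> >
> >   require 'socket'
> >
> >   num_processors = 2
> >   workers        = ThreadGroup.new
> >   socket         = TCPServer.new("127.0.0.1", 9000)
> >
> >   # Placeholder for Mongrel's real request handling.
> >   process_client = lambda do |client|
> >     client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
> >     client.close
> >   end
> >
> >   loop do
> >     # Wait for a free worker slot *before* accepting, so excess
> >     # connections stay in the kernel's listen backlog instead of
> >     # being accepted and then closed (or queued) inside the server.
> >     sleep 0.1 while workers.list.length >= num_processors
> >
> >     client = socket.accept
> >     workers.add(Thread.new(client) { |c| process_client.call(c) })
> >   end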
> >
> > ====
> >
> > With the current Mongrel code,  BalancerMember max > 1 and Mongrel
> > num_processors > 1 triggers the  accept/close bug.
> >
> > Likewise, BalancerMember max > 1 with Mongrel num_processors > 1 runs
> > into Mongrel's request queuing....
> >
> > ====
> >
> > Conclusion ---
> >
> > I'd like to see Mongrel return a 503 Server Busy when an incoming
> > request hits the num_processors limit.
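> >
> > In the current accept-then-check code path, the change would be roughly
> > this (a hypothetical sketch, not a patch; today Mongrel just closes the
> > over-limit socket silently):
> >
> >   ERROR_503 = "HTTP/1.1 503 Service Unavailable\r\n" \
> >               "Connection: close\r\nContent-Length: 0\r\n\r\n"
> >
> >   client = socket.accept
> >   if workers.list.length >= num_processors
> >     # Over the limit: answer with an explicit 503 so the proxy can
> >     # fail over or report something sensible, rather than dropping
> >     # the connection and leaving Apache to turn it into a 502.
> >     client.write(ERROR_503) rescue nil
> >     client.close rescue nil
> >   else
> >     workers.add(Thread.new(client) { |c| process_client.call(c) })
> >   end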
> >
> > For practical use, the fix is to configure mod_proxy_balancer so that
> > it shields Mongrel from ever encountering either issue.
> >
> > _______________________________________________
> > Mongrel-users mailing list
> > Mongrel-users at rubyforge.org
> > http://rubyforge.org/mailman/listinfo/mongrel-users
>
>
>


-- 
Evan Weaver
Cloudburst, LLC

