[Mongrel] problems with apache 2.2 proxying to mongrel cluster

Michael Kovacs kovacs at gmail.com
Tue Jan 2 18:21:50 EST 2007


So I've used rotating logs with lighttpd/fcgi, and the only issue  
was that when the log rotated you would lose all further requests,  
because the processes no longer had a valid handle to the logfile.  
So every day at log rotation time I would restart my fcgi processes  
so that a new production.log file would be populated and my app  
would go along its merry way. Apparently I've hit the same thing  
with mongrel, only the processes die with no errors anywhere. I  
still have to confirm that this is the cause of my problem, but  
I'll be surprised if it's not.
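[Editor's note: the failure mode described above, where an external tool rotates the file while the long-running process keeps its original file descriptor, can be reproduced with a few lines of plain Ruby. The filenames are illustrative; this is a sketch of the mechanism, not Michael's actual setup.]

```ruby
require "fileutils"
require "logger"

# The long-running process opens the log and holds the descriptor.
log = Logger.new("production.log")
log.info("before rotation")

# What an external rotation tool does: rename the file aside and
# create a fresh, empty one. The process's descriptor still points
# at the old inode, now named production.log.1.
File.rename("production.log", "production.log.1")
FileUtils.touch("production.log")

# This write lands in the *renamed* file; the new log stays empty.
log.info("after rotation")

puts File.read("production.log.1").include?("after rotation")  # => true
puts File.read("production.log").empty?                        # => true
```

This is why the workaround above (restarting the processes at rotation time) works: a restart reopens the file by name and picks up the fresh inode.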

-Michael
http://javathehutt.blogspot.com

On Jan 2, 2007, at 2:51 PM, Michael D'Auria wrote:

> I didn't know I could use the Logger class like that; pretty sweet.
>
> Do people know if rotating the logs via Logger is the issue, or  
> just rotating in general?
>
> .: Michael :.
>
>
> On 1/2/07, Michael Kovacs <kovacs at gmail.com> wrote:
> Hmm... thanks for the suggestion. I'll give that a look, as well as
> search the archives here.
>
> I do have the logger rotating daily right now:
> config.logger = Logger.new(config.log_path, 'daily')
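[Editor's note: for reference, Ruby's stdlib Logger supports both age-based and size-based rotation via the arguments sketched below (filenames here are illustrative). Note that this built-in rotation is performed independently by each process at write time, which matters when several mongrels share one log_path.]

```ruby
require "logger"

# Age-based rotation: the file is shifted aside on the first write
# after the period rolls over ('daily', 'weekly', or 'monthly').
daily = Logger.new("illustrative_daily.log", "daily")

# Size-based rotation: keep up to 5 old files, shifting at ~1 MB.
sized = Logger.new("illustrative_sized.log", 5, 1_048_576)

daily.info("rotated by age")
sized.info("rotated by size")
```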
>
> I think I'll remove that and see if this occurs again. The thing  
> that still bothers me, though, is that if one mongrel is broken  
> from this, shouldn't they all be? I can always count on 1 of 4  
> mongrels working, so that every 4th request is successful :-) To  
> monitor that I'd have to hit the site twice to ensure that I'm  
> not just getting the good mongrel on the first hit. Not very fresh.
>
> -Michael
> http://javathehutt.blogspot.com
>
> On Jan 2, 2007, at 1:36 PM, Joey Geiger wrote:
>
> > This may have to do with log rotation. There was a thread about a
> > similar issue posted in, I believe, December. You might want to
> > try searching the mailing list archives.
> >
> > You're probably rotating your logs based on a specific size, which
> > is why it's happening every couple of hours instead of nightly.
> >
> > On 1/2/07, Michael Kovacs <kovacs at gmail.com> wrote:
> >> Hi all,
> >>
> >> I've been having problems with the apache 2.2 /
> >> mod_proxy_balancer / mongrel setup.
> >>
> >> My setup is:
> >>
> >> CentOS 4.3
> >> apache 2.2.3 (compiled from source) with mod_proxy_balancer
> >> mysql 4.1
> >> ruby 1.8.4
> >> mongrel 0.3.14 (I know I need to update but I think this problem is
> >> independent of the mongrel version)
> >> mongrel_cluster 0.2.0
> >> rails_machine 0.1.1
> >>
> >> I have apache set up as per Coda's configuration from his blog
> >> posting from several months back:
> >> http://blog.codahale.com/2006/06/19/time-for-a-grown-up-server-rails-mongrel-apache-capistrano-and-you/
> >>
> >> I have 4 mongrels in my cluster.
> >>
> >> Things work fine for periods of time, but after several hours of
> >> inactivity (I think 8 hours or so) I experience oddness where
> >> only 1 of the 4 mongrels is responding properly. I end up
> >> getting a "500 internal server error" on 3 out of 4 requests as
> >> they round-robin from mongrel to mongrel. There is nothing in
> >> the production log file nor in the mongrel log. I've reproduced
> >> this problem on my staging box as well as my production box.
> >>
> >> The last time I reproduced the problem I decided to run "top"
> >> and see what's going on when I hit the server. Mongrel does
> >> receive every request, but mysql is only active for the 1
> >> request that works; for the other mongrels it never spikes up
> >> in CPU usage.
> >>
> >> Looking at the mysql process list revealed that all of the
> >> connections were in the "Sleep" state, but one of the processes
> >> is still working properly. I haven't played with connection
> >> timeouts, other than setting the timeout in my application's
> >> environment (ActiveRecord::Base.verification_timeout = 14400)
> >> and the mysql interactive_timeout variable. It seems that either
> >> all the mongrels should work or none of them should; the fact
> >> that 1 out of 4 always works is rather puzzling to me.
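[Editor's note: MySQL's default wait_timeout is 28800 seconds (8 hours), which lines up with the "several hours of inactivity" described above. A hypothetical environment.rb fragment for the Rails 1.x-era setting the post mentions:]

```ruby
# config/environment.rb (Rails 1.x-era ActiveRecord API, as
# referenced in the post above). Ask ActiveRecord to verify a
# connection that has been idle longer than this many seconds before
# reusing it, so that MySQL's wait_timeout (default 28800 s = 8 h)
# doesn't leave a mongrel holding a dead connection.
ActiveRecord::Base.verification_timeout = 14_400
```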
> >>
> >> Trying a "killall -USR1 mongrel_rails" to turn debugging on
> >> simply killed the 4 mongrel processes. So now I'm running the
> >> cluster in debug mode and am going to just let it sit there for
> >> several hours until it happens again, and hopefully get some
> >> idea of where the breakdown is happening. I still think it has
> >> to be a mysql connection timeout, but again, the fact that 1 of
> >> the 4 always works doesn't lend credence to the timeout theory.
> >>
> >> Has anyone experienced this phenomenon themselves?
> >>
> >> Thanks for any tips/pointers and thanks Zed for all your hard work
> >> with
> >> mongrel.
> >>
> >>
> >>
> >> -Michael
> >> http://javathehutt.blogspot.com
> >>
> >>
> >> _______________________________________________
> >> Mongrel-users mailing list
> >> Mongrel-users at rubyforge.org
> >> http://rubyforge.org/mailman/listinfo/mongrel-users
> >>
> >>
