[Mongrel] Debugging high CPU with Mongrel

Dallas DeVries dallas.devries at gmail.com
Wed Dec 13 21:41:27 EST 2006


Alright, so I started clean on a different machine (3 GHz, 2 GB RAM):

   - pen/mongrel (tried prerelease and stable, 7 instances) ->
     apache/mongrel_cluster/mongrel prerelease (9 instances)
   - Debian sarge -> Ubuntu stable


Unfortunately I'm still getting hangs. Things run fine for hours at a time
and through tens of thousands of page hits.

   - Plenty of RAM left
   - No swap being used
   - CPU hanging at 80-95% on the mongrel processes
   - Killing the mongrel processes and restarting will not fix it until I
     delete the cache directory (see the sketch below for gauging cache growth)

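As a rough way to gauge that cache growth, something like this Ruby sketch
works (the cache root path is just a placeholder for wherever the page cache
lives):

    # Count cached files per directory to spot hot spots in the page cache.
    # '/var/www/app/public/cache' is a placeholder -- point it at your root.
    require 'find'

    counts = Hash.new(0)
    Find.find('/var/www/app/public/cache') do |path|
      counts[File.dirname(path)] += 1 if File.file?(path)
    end

    counts.sort_by { |_dir, n| -n }.first(10).each do |dir, n|
      puts "#{n}\t#{dir}"
    end
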
A couple of the many things I tried:

   - lsof -i -P | grep CLOSE_WAIT, which showed no mongrels in the list
   - killall -USR1 mongrel_rails to run the servers in debug mode, which
     logged the following once per instance (9 times):

** USR1 received, toggling $mongrel_debug_client to true


However, this toggle didn't seem to provide any more information than what
is normally output.  I'm not sure whether I'm using it correctly.  If
anyone has any insight or other suggestions to try, it would be greatly
appreciated!
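
For reference, my understanding is that the toggle amounts to little more
than a signal trap along these lines (an illustrative sketch, not the
actual Mongrel source):

    # Illustrative sketch of a USR1 debug toggle; the names mirror what the
    # log line above suggests is happening, not the real Mongrel internals.
    $mongrel_debug_client = false

    trap('USR1') do
      $mongrel_debug_client = !$mongrel_debug_client
      STDERR.puts "** USR1 received, toggling $mongrel_debug_client to #{$mongrel_debug_client}"
    end

If the flag mostly adds client-level logging when a request errors out, that
might explain why I see nothing extra during a plain CPU spin.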

Cheers,
Dallas

On 12/7/06, Dallas DeVries <dallas.devries at gmail.com> wrote:
>
> Hi guys thanks for the response,
>
> Yeah, I initially figured that the number of files in the directory was the
> problem, and it was the first thing I changed earlier this week.  I changed
> my caching to use many more subdirectories so that I don't have 10,000+
> files in one directory any more (a sketch of the scheme is below).  Right
> now my largest directory has about 2,000-3,000 files; most of the rest are
> much smaller than this.  Unfortunately that change did not help the
> problem.  I can have upwards of 50,000 files, so I'm not sure how feasible
> memcached is at the moment.  Anyways, I'm almost done with a clean install
> of the apache/mongrel_cluster setup mentioned by Konstantin, so hopefully
> that solves my problem.
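>
> The sharding is roughly this (a minimal sketch; the helper name and the
> two-level split are just how I'd illustrate it, not my exact code):
>
>     require 'digest/md5'
>
>     # Hypothetical helper: fan cache files out across two levels of
>     # subdirectories keyed on an MD5 of the cache key, so no single
>     # directory accumulates tens of thousands of files.
>     def sharded_cache_path(root, key)
>       digest = Digest::MD5.hexdigest(key)
>       File.join(root, digest[0, 2], digest[2, 2], digest)
>     end
>
>     sharded_cache_path('/var/www/app/cache', '/articles/42')
>     # => something like "/var/www/app/cache/3d/a1/3da1..."
>     #    (a 256-way fan-out at each level)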
>
> Cheers,
> Dallas
>
> On 12/6/06, anjan bacchu <anjan.summit at gmail.com> wrote:
>
> > > directory.  When you reach 10-100K files in a single directory you
> > > may notice the slowdown without a stopwatch.  NetApp's own file
> > > system was designed to have a constant access time regardless of the
> > > number of files in the directory (that was their claim to fame
> > > anyway).
> > >
> > > Bottom line: I wouldn't rule out a directory with 10K+ files as the
> > > main reason for your performance degradation.
> > >
> > > Hope this helps,
> > > Konstantin
> > >
> > >
> > >
> > Hi Konstantin,
> >
> >    You're right on.
> >
> > Dallas: ReiserFS is designed to perform well in exactly this situation,
> > with a lot of small files in the same directory.  Can you switch to
> > ReiserFS?  If not, can you switch to memcached (a sketch follows below)?
> > Are you expiring your sessions?
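> >
> > For memcached, a minimal sketch with the memcache-client gem (assuming a
> > memcached daemon on localhost:11211; the key is a placeholder):
> >
> >     require 'memcache'
> >
> >     # Minimal memcache-client usage; host/port and key are placeholders.
> >     cache = MemCache.new('localhost:11211')
> >     cache.set('page:/articles/42', '<html>...</html>', 600)  # 10 min TTL
> >     html = cache.get('page:/articles/42')  # nil once the entry expires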
> >
> > BR,
> > ~A
> >
> > _______________________________________________
> > Mongrel-users mailing list
> > Mongrel-users at rubyforge.org
> > http://rubyforge.org/mailman/listinfo/mongrel-users
> >
> >
>