[Backgroundrb-devel] 100meg logger worker?

Noah Horton noah at rapouts.com
Mon Feb 4 20:30:11 EST 2008


Hey All,
  First off, thanks for everyone's work on BackgroundRB.  My company just started using it for our site, and I have a few questions that I could not find documented elsewhere.  Perhaps you can help me out with some info.

 1) I see two 'overhead' Ruby processes: the backgroundrb start process and the logger worker.  Both of these are big - about 108 MB of memory each.  Do I have something misconfigured in my setup, or is this normal?
 2) Older versions of BDRB appear (based on the docs) to have had both a plain Ruby worker and a Rails worker that one could extend, whereas 1.0 has only MetaWorker.  Is there a way to get a plain Ruby worker without loading my whole Rails app?
 3) I seem to have hit a memory leak the first time I deployed: the processes kept growing in a stair-step pattern, roughly 50 MB per step.  My best hypothesis is that objects passed into ask_worker get kept in some data structure that never goes away, so if you pass any objects there, things balloon; is that correct?  If not, do you have any thoughts on common 'gotchas' that could cause memory growth?  A sketch of the kind of call I mean follows below.
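
For context on (2) and (3), here is a minimal sketch of the kind of worker and call I mean.  The class, method, and model names are made up for illustration, and I am assuming the standard 1.0 MetaWorker API (BackgrounDRb::MetaWorker, set_worker_name, MiddleMan.ask_work - the latter is what I called ask_worker above); our real code is bigger but has this shape:

  # lib/workers/billing_worker.rb -- names are illustrative
  class BillingWorker < BackgrounDRb::MetaWorker
    set_worker_name :billing_worker

    # called once when the worker is loaded
    def create(args = nil)
    end

    def charge_customer(customer)
      # ... slow work on the customer here ...
    end
  end

  # in a controller: passing a full ActiveRecord object as :data is the
  # pattern I suspect in (3); passing current_customer.id and re-fetching
  # the record inside the worker would be the obvious alternative
  MiddleMan.ask_work(:worker => :billing_worker,
                     :worker_method => :charge_customer,
                     :data => current_customer)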

Thanks so much, everyone.  I will try to blog the final answers to the above so this information is available for others.

-Noah Horton