[Backgroundrb-devel] 100meg logger worker?

hemant gethemant at gmail.com
Tue Feb 5 15:04:32 EST 2008


On Feb 5, 2008 10:42 AM, Ian Smith-Heisters <heisters at greenriver.org> wrote:
> On 2/4/08, Noah Horton <noah at rapouts.com> wrote:
> >   3) I seemed to have a memory leak the first time I deployed, where the
> > processes kept growing in a stair-step pattern of 50 meg stairs.  My best
> > hypothesis is that objects passed into ask_worker get kept in some
> > datastructure that never goes away, and thus if you pass any objects there,
> > things balloon; is that correct?  If not, do you have any thoughts on common
> > 'gotchas' that could cause memory growth?

ask_worker doesn't keep the objects you pass to it in any data
structure. However, you don't want to pass big objects anyway. Also, go
light on logging if possible.
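
A common way to avoid passing big objects is to hand the worker only a
small identifier and let it re-load the data itself. The sketch below
shows that pattern in plain Ruby; the class and method names are
illustrative stand-ins, not BackgrounDRb's actual API:

```ruby
# Hedged sketch: pass a small id to the worker instead of a large object.
# FakeWorker and load_record are hypothetical placeholders.

class FakeWorker
  def process(record_id)
    record = load_record(record_id) # worker fetches its own data
    record[:name].upcase
  end

  private

  # placeholder for a real database lookup
  def load_record(id)
    { id: id, name: "example" }
  end
end

# Instead of shipping a huge object across the ask_worker boundary,
# ship just the id:
result = FakeWorker.new.process(42)
# → "EXAMPLE"
```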

>
> With an old version I found that I had to explicitly call #kill at the
> end of the task, or the object would sit around forever. I don't think
> there's an equivalent in the new version, or what version you're
> running, so I've no idea if that will solve your question.

The new version has "exit". So, if you call exit from inside your
worker, it will die.
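
In Ruby, Kernel#exit raises SystemExit, which unwinds the stack and
ends the process, so calling it at the end of a worker method shuts the
worker down. A minimal stand-alone sketch (the class and method names
are hypothetical, not BackgrounDRb's real base class):

```ruby
# Hedged sketch: a worker method that terminates its process when done.
# CleanupWorker is a stand-in; a real BackgrounDRb worker would subclass
# the library's worker base class instead.

class CleanupWorker
  def perform(job)
    do_the_work(job)
  ensure
    exit # raises SystemExit, ending the worker process
  end

  def do_the_work(job)
    job.to_s
  end
end
```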


More information about the Backgroundrb-devel mailing list