[Backgroundrb-devel] best practices for robust workers

Craig Ambrose craig at craigambrose.com
Tue Sep 18 19:30:58 EDT 2007

Hi folks,

I've got some BackgroundRB workers handling a long-running task
(triggered by a user). They work very well on my staging server, and I
just wanted to check whether anyone has any advice before I put them
into production.

When I start the worker (which performs an import), I write a record
to my database for each import. The worker updates the record to
indicate progress and marks it as complete when finished. These tasks
are idempotent, so I'm not concerned about attempting to do all or
part of an import multiple times.
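
For concreteness, the worker looks roughly like this (a simplified
sketch with made-up model and method names, assuming the older
BackgrounDRb::Worker::RailsBase style of worker):

  class ImportWorker < BackgrounDRb::Worker::RailsBase
    def do_work(args)
      import = Import.find(args[:import_id])
      import.update_attribute(:status, 'running')

      # Each row is processed idempotently, so re-running all or part
      # of the import is safe.
      import.rows.each_with_index do |row, index|
        process_row(row)
        import.update_attribute(:progress, index + 1)
      end

      import.update_attribute(:status, 'complete')
    end
  end
  ImportWorker.register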

It all works very nicely, but I haven't yet tested it under high load,
and I have a couple of questions:

How does the pool size work? For example, say I have a pool size of 5
and I try to create a sixth worker: what happens? Will it be queued and
then created when one of the existing five finishes?
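
For reference, each worker currently gets created from a controller
action along these lines (again simplified, and assuming the
MiddleMan.new_worker API), so a sixth one would just be another request
arriving while five imports are still running:

  def create
    import = Import.create!(:status => 'queued')
    MiddleMan.new_worker(:class => :import_worker,
                         :args  => { :import_id => import.id })
    redirect_to import_path(import)
  end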

Does the BackgroundRB server crash much? If so, I could restart my
workers based on their records in the database. Do people do this? As a
manual step only? I could write a rake task, run whenever the server
dies, that restarts any unfinished imports, or perhaps I could hook it
in somewhere. Does BackgroundRB try to restart itself if it fails? If
so, is there a hook there I can use to restart my workers?
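
To make that concrete, the rake task I have in mind would be something
like this (just a sketch; it assumes the imports table has a status
column and leans on the imports being idempotent):

  namespace :imports do
    desc "Re-queue imports that never finished"
    task :restart_stalled => :environment do
      stalled = Import.find(:all, :conditions => ["status != ?", 'complete'])
      stalled.each do |import|
        MiddleMan.new_worker(:class => :import_worker,
                             :args  => { :import_id => import.id })
      end
    end
  end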



PS: I realise BackgroundRB is not well maintained at the moment. I'm
happy to help to some extent, as I think it's a great project; in
particular, I can submit patches with tests and fixes for any problems
I encounter that affect my work.

Craig Ambrose
