[Backgroundrb-devel] Scheduled workers only run once unless you call self.delete inside the worker

hemant gethemant at gmail.com
Thu Apr 5 19:05:39 EDT 2007


On 3/6/07, Ezra Zygmuntowicz <ezmobius at gmail.com> wrote:
>
>         You are right, you need to call self.delete at the end of a worker to
> destroy it. Bdrb doesn't auto destroy any workers currently.
>
> -Ezra
>
>
> On Mar 5, 2007, at 2:24 PM, David Balatero wrote:
>
> > I had a worker scheduled to run every minute with
> > backgroundrb_schedules.yml:
> >
> > ebay_runner:
> >   :class: :ebay_auction_worker
> >   :job_key: :ebay_auction_runner
> >   :trigger_type: :cron_trigger
> >   :trigger_args: 0 * * * * * *
> >
> > This worker posts an auction to eBay from a queue of auctions every
> > minute. I was having a problem where the worker would run fine the
> > first time, but never on any subsequent minute after that. I fixed it
> > by adding a call to self.delete at the end of the do_work() method:
> >
> > ----
> > class EbayAuctionWorker < BackgrounDRb::Rails
> >   # Set this worker to run every minute.
> >   attr_accessor :progress, :description
> >
> >   def do_work(args)
> >     # This method is called in its own new thread when you
> >     # call new_worker. args is set to :args
> >
> >     @progress = 0
> >     @description = "Checking for eBay auctions and posting"
> >
> >     logger.info("Checking to post an auction at #{Time.now.to_s}.")
> >     auction = EbayAuction.find(:first,
> >       :conditions => ["auction_status = ? AND post_on < ?",
> >                       EbayAuction::STATUS_STRINGS[:queued], Time.now])
> >     if auction
> >       logger.info("--- Posting auction: #{auction.title}")
> >
> >       auction.post
> >     else
> >       logger.info("--- No auctions currently need posting.")
> >     end
> >
> >     @progress = 100
> >
> >     logger.info("--- Finished auction check.")
> >
> >     # Exit the thread, see what happens
> >     self.delete
> >   end
> > end
> >
> > EbayAuctionWorker.register
> > --------
> >
> >
> > Does this make sense? I never saw any mention of having to do this
> > with scheduled workers in the documentation. Shouldn't the
> > scheduler automatically clean up the worker with a self.delete,
> > after @progress is >= 100?
> >

He is right that the worker has to be destroyed. But shouldn't BDRb
reuse the same worker across scheduled runs, at least when the method
to be called is anything other than do_work? Recreating the worker for
a repeated task looks like unnecessary overhead.

Suppose I am reading from a TCP server: recreating the worker would
mean disconnecting and reconnecting on every iteration, which is not a
good thing.
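
Roughly the pattern I have in mind, as a plain-Ruby sketch (the
PersistentFeedWorker class, its connect-once constructor, the run
method called on every tick, and the wire protocol are all
hypothetical, not the current BDRb API):

----
require 'socket'

# Hypothetical long-lived worker: the TCP connection is opened once in
# the constructor, and every scheduled run reuses it instead of the
# worker (and its socket) being recreated for each iteration.
class PersistentFeedWorker
  def initialize(host, port)
    @socket = TCPSocket.new(host, port)   # connect only once
  end

  # Called on every scheduled tick; reuses @socket.
  def run
    @socket.puts("GET_UPDATES")           # made-up wire protocol
    line = @socket.gets
    process(line) if line
  end

  private

  def process(line)
    # handle one update from the feed
  end
end
----

With the current behaviour, the equivalent of initialize runs on every
tick, which is exactly the reconnect overhead I would like to avoid.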

Also, I have seen that for workers running at an interval of more than
30 minutes or so, the MySQL connection gets dropped, the worker throws
a couple of exceptions, and the scheduled method never runs again.
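
The only workaround I can think of is to check the connection at the
top of every run, something like the rough sketch below (it assumes
ActiveRecord's verify_active_connections! and reconnect! calls are
available in the Rails version being used):

----
  def do_work(args)
    # Guard against a dropped MySQL connection: ask ActiveRecord to
    # verify the connection, and reconnect if the server has timed it
    # out since the previous run.
    begin
      ActiveRecord::Base.verify_active_connections!
    rescue StandardError
      ActiveRecord::Base.connection.reconnect!
    end

    # ... the actual scheduled work goes here ...
  end
----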

Is there any simple solution to get rid of this behaviour?

-- 
gnufied
-----------
There was only one Road; that it was like a great river: its springs
were at every doorstep, and every path was its tributary.
http://people.inxsasia.com/~hemant

